IMPACT: Improving Learning Outcomes through Rigorous yet Practical Research

Concurrent Session 3

Brief Abstract

There is a systemic need for practical research that examines edtech efficacy. This session illustrates how schools nationwide are using a rapid-cycle evaluation tool (i.e., IMPACT™) to collect and analyze data, and discusses the evidence schools are generating to inform decisions about edtech discovery, purchasing, and evaluation.


Karl Rectanus is an educator, entrepreneur, and adviser. He is cofounder and CEO of Lea(R)n, an education innovation Benefit Corporation that empowers educators and their institutions to organize, streamline, and analyze education technology through its research-backed LearnPlatform. Karl leads schools and districts, states and networks, and colleges and universities in their efforts to simplify edtech selection, procurement, implementation, and measurement. He works with learning organizations and networks across the country to establish and elevate standards of practice that drive personalized learning at scale, student achievement, and equity in access. Originally an educator and administrator in the US and abroad, Karl has started and led multiple education innovation organizations and currently advises districts, states, and foundations. He has lived, worked, and studied in more than 12 countries, was an NC Teaching Fellow and James M. Johnston Scholar at UNC-Chapel Hill, and has completed graduate coursework at UCLA’s Anderson Business School and CalTech Executive Extension. Karl, his wife, and three daughters live in Raleigh, NC.
Dr. Daniel Stanhope is the VP of Research and Analytics at Lea(R)n, where he leads a team of researchers, data scientists, and statisticians, and provides expertise on research design, scientific methodology, program evaluation, learning and performance measurement, and psychometrics. Daniel received his PhD in Industrial and Organizational Psychology from North Carolina State University. Dr. Stanhope has worked as an applied scientist, research methodologist, statistical consultant, and psychometrician with various private, public, and not-for-profit organizations. Daniel has published research in numerous outlets, including the Journal of Applied Psychology, Journal of Research on Technology in Education, Journal of Psychoeducational Assessment, and Journal of Personnel Psychology. Daniel has also delivered dozens of presentations at conferences across the world, serves as a reviewer for multiple academic journals, and sits on the Editorial Review Board of the Journal of Online Learning Research.

Extended Abstract


Session outcome: Attendees will learn how to save time and money on edtech while generating evidence designed to inform spending decisions that drive student outcomes.

Thought experiment and audience engagement: You’re tasked with selecting three new digital learning tools to improve learning outcomes for your students. Where do you start? You realize there is an overwhelming number of products. Instead of sorting through thousands of seemingly comparable products, you try to find independent research or evidence that supports their effectiveness, and you find nothing useful. You ask peers and colleagues what they are using. You get a list of products and go to their websites looking for research to support their claims; you find a lot of marketing but no independent research. You think, “Perhaps there are reviews.” You find that all your products have 4.2 stars on the Apple and Google app stores, which tells you nothing. You have 10-20 salespeople calling you every day, administrators demanding an answer, a shrinking budget, dwindling time, and, worst of all, absolutely no data or evidence to make or support a decision on what to buy and use. What do you do? Administrators and educators face this onerous predicament in schools and districts across the nation, every day.


Educational technology (edtech) is increasingly pervasive in schools. New products are released constantly, and annual spending runs into the billions of dollars. The digitization of classrooms and the growing adoption of edtech products are supposed not only to modernize the 21st-century classroom but also to increase equity and to enhance and personalize learning for students. Despite the immense time, energy, and money invested in edtech, research examining whether products actually improve learning outcomes is scarce. Further, schools, districts, states, and institutions (education organizations) have lacked a systematic way to monitor and evaluate the efficacy of their local edtech portfolios. This has left education organizations without the data and evidence needed to make high-stakes decisions about which products to adopt and how to implement them in a way that maximizes impact on student learning.

Calls for the use of research and evidence in edtech decisions have increased. At the EdTech Efficacy Research Symposium, a recent gathering of educators, researchers, funders, and policymakers hosted by Digital Promise, Jefferson Education Accelerator, and the University of Virginia Curry School of Education, two mantras pervaded the event: “Show Me the Evidence” and “Merit over Marketing.” The problem with rigorous edtech research to date is that it is not practical, actionable, or relevant. There is an unmanageable number of products; word of mouth about edtech products is unsystematic and therefore unreliable; reviews (especially generic star ratings) are not useful; vendor marketing is neither objective nor trustworthy; time is at a premium; and the limited research and evidence that exists is neither practical nor useful. The endeavor to understand edtech effectiveness has not bridged the gap between research and practice. To bridge that gap, education organizations need timely access to practical, actionable, relevant, and rigorous evidence that enables data-driven decisions about edtech interventions.

The Solution

As highlighted in the thought experiment, decision makers face many barriers to making informed choices about edtech. In this session we address these issues and describe the results of our work with thousands of educators, district and state leaders, technologists, and researchers to build an online edtech management platform (LearnPlatform). The platform organizes more than 5,000 products with descriptions and feature analyses, contains a rubric that allows educators to grade products systematically, and features an analytics engine that integrates and analyzes data to provide users with actionable reports and dashboards. The platform saves time and money while generating evidence designed to inform decisions.

LearnPlatform has been used by more than 100,000 educators across dozens of districts and states. This session explains how participants can (a) get visibility into the edtech their classrooms are using; (b) gather data on the extent to which products are being used; (c) provide quantitative ratings and qualitative feedback on products; and (d) conduct rapid evaluations of the products they are using. The platform’s analytics module, IMPACT, allows education organizations to rapidly integrate multiple data sources to generate evidence-based insights on edtech interventions.

LearnPlatform IMPACT Analysis

Engage participants by walking through an IMPACT analysis with real data. Participants will see the system at work and will be able to examine data visualizations and reports.

IMPACT enables education organizations to integrate data from multiple sources including educator feedback, pricing data, product usage, and student achievement data to produce evidence-based reports and dashboards on product usage and effectiveness. IMPACT helps education organizations address numerous research questions by examining multiple aspects of the edtech intervention.

Types of Evidence Produced by IMPACT

Sharing reports and dashboards will engage participants by allowing them to examine data and discuss applications.

Did it work?

Overall effect size and summary of the results.
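As a minimal sketch of what an overall effect size computation involves (our own illustration with made-up scores, not the platform's implementation), a standardized mean difference such as Cohen's d divides the difference in group means by the pooled standard deviation:

```python
from math import sqrt

def cohens_d(treatment, control):
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    n_t, n_c = len(treatment), len(control)
    mean_t = sum(treatment) / n_t
    mean_c = sum(control) / n_c
    var_t = sum((x - mean_t) ** 2 for x in treatment) / (n_t - 1)
    var_c = sum((x - mean_c) ** 2 for x in control) / (n_c - 1)
    pooled_sd = sqrt(((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical achievement scores for product users vs. non-users
d = cohens_d([78, 82, 85, 90, 74], [70, 75, 72, 80, 68])
print(round(d, 2))
```

A positive d indicates the treatment group outperformed the control group, in pooled-standard-deviation units.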

When, why, and for whom did it work?

Subgroup analysis, which examines usage and effect based on various subgrouping variables.
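A subgroup analysis can be sketched as grouping outcomes by a subgrouping variable and condition and then comparing cells; the records and score gains below are entirely hypothetical:

```python
from collections import defaultdict

# Hypothetical records: (subgroup, used_product, score_gain)
records = [
    ("grade 3", True, 12), ("grade 3", True, 9),  ("grade 3", False, 5),
    ("grade 3", False, 7), ("grade 4", True, 4),  ("grade 4", True, 6),
    ("grade 4", False, 5), ("grade 4", False, 3),
]

def subgroup_gains(rows):
    """Mean score gain per (subgroup, condition) cell."""
    cells = defaultdict(list)
    for subgroup, used, gain in rows:
        cells[(subgroup, used)].append(gain)
    return {key: sum(v) / len(v) for key, v in cells.items()}

means = subgroup_gains(records)
# Difference in mean gain (users minus non-users) within each subgroup
for g in ("grade 3", "grade 4"):
    print(g, means[(g, True)] - means[(g, False)])
```

In this toy data the product appears to benefit grade 3 students more than grade 4 students, which is exactly the kind of "for whom did it work" question subgroup analysis addresses.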

How much is it costing me?

Cost analysis shows how much education organizations are spending in total (direct and indirect costs), how much they are spending on different usage groups (e.g., on licenses that are never used), and how much they are spending on students who use the product to fidelity (i.e., meet the dosage recommendation).
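The figures below are invented, but they sketch the arithmetic behind such a cost analysis: total spend per license, spend tied up in unused licenses, and cost per student who met the dosage recommendation:

```python
# Hypothetical license and usage figures for one product
total_cost = 50_000.0          # direct + indirect annual spend
licenses = 1_000
students_active = 640          # logged in at least once
students_at_fidelity = 250     # met the vendor's dosage recommendation

cost_per_license = total_cost / licenses
unused_license_spend = cost_per_license * (licenses - students_active)
cost_per_fidelity_student = total_cost / students_at_fidelity

print(cost_per_license)            # 50.0
print(unused_license_spend)        # 18000.0
print(cost_per_fidelity_student)   # 200.0
```

Even this toy example illustrates why the cost-per-fidelity-student figure can differ sharply from the nominal per-license price.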

The system's technical methodology reflects a deliberate focus on research design. Reports display a power analysis, and a data validation feature automatically selects an appropriate research design. Additional sections provide more detail on the methods, and a dedicated tab makes the system's underlying technical methodology transparent.
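As one illustration of the kind of power analysis such reports display (a textbook normal approximation for a two-sample comparison, not necessarily the system's own method), the required per-group sample size for a target effect size can be estimated as:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample comparison
    (two-sided test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_group(0.5))   # medium effect
print(n_per_group(0.2))   # small effect
```

Small effects require far larger samples, which is why usage-cluster sizes matter when interpreting an evaluation's sensitivity.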


Engage participants by having them step into the shoes of past IMPACT users. Participants will identify with the questions those users asked and will use the results to determine how they would make evidence-based decisions.

Lea(R)n has worked with numerous schools, districts, and networks across the nation, each with widely varying needs and resources. This session will review examples of how different schools and districts have leveraged IMPACT to generate valuable insights for practical needs:

Scenario A: Controlled Trial for Single Product in Large District

Lea(R)n worked with a large district to examine the efficacy of a widely used educational technology product in elementary literacy. The sample included approximately 18 schools that used the product (treatment group; nT > 8,000) and another 18 schools that did not (control group; nC > 8,000). We conducted an efficacy trial using a quasi-experimental design. We tested for baseline equivalence on multiple measures, including demographic data and prior achievement on the target criteria, and applied statistical controls (or adjustments) to partial out variance attributable to extraneous factors. We conducted cluster analysis to identify natural clusters of product usage, examined achievement for the different clusters, and generated effect sizes to determine the extent to which the product affected the treatment group. Additional cost analysis informed the district's purchasing and budgeting decisions.
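A baseline equivalence check of the kind described can be sketched as a standardized difference in prior achievement between groups, compared against a conventional threshold (the scores below are made up, and the 0.25 cutoff follows common practice such as the What Works Clearinghouse guidance rather than a procedure confirmed by the source):

```python
from math import sqrt

def standardized_difference(a, b):
    """Baseline difference between groups in pooled-SD units."""
    mean_a = sum(a) / len(a)
    mean_b = sum(b) / len(b)
    var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)
    pooled_sd = sqrt((var_a + var_b) / 2)
    return (mean_a - mean_b) / pooled_sd

# Hypothetical prior-year scores for treatment and control schools
treat = [410, 420, 415, 405, 425]
ctrl = [412, 418, 414, 408, 423]
diff = standardized_difference(treat, ctrl)
print(abs(diff) <= 0.25)  # within a common adjust-and-proceed threshold
```

When the standardized difference is small, groups can be treated as comparable at baseline; larger differences call for statistical adjustment or a redesigned comparison.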

Scenario B: Evaluation of Multiple Products in Charter School Network

In collaboration with 12 schools (N > 6,000 students), Lea(R)n conducted an efficacy trial on 10 math and literacy products, using three different standardized tests as achievement metrics. We tested for baseline equivalence on multiple measures, including demographic variables and prior achievement on the target criteria, and applied statistical controls (or adjustments) to partial out variance attributable to extraneous factors. We conducted cluster analysis to identify natural clusters of product usage, examined achievement for the different clusters, and generated effect sizes to determine the extent to which each product affected its treatment group relative to the control group. We then drilled down further, generating effect sizes for different groupings (e.g., low vs. high achievers at baseline, lower vs. higher grades). We also conducted multilevel modeling to estimate school-level effects and to examine factors that mediated and moderated the relationship between product use and achievement. The network used the results to determine which products to continue using and how best to implement them across its various schools and student populations.
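The "natural clusters of product usage" step in both scenarios can be sketched as a simple one-dimensional k-means over usage minutes (the data and the choice of k = 2 are illustrative; the actual analyses presumably use richer usage features):

```python
def kmeans_1d(values, centers, iters=20):
    """Tiny 1-D k-means: alternate assignment and mean-update steps."""
    groups = [[] for _ in centers]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            # assign each value to its nearest current center
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        # move each center to the mean of its assigned values
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers, groups

# Hypothetical weekly usage minutes per student
minutes = [5, 9, 10, 12, 55, 60, 63, 70]
centers, groups = kmeans_1d(minutes, centers=[0.0, 100.0])
print(centers)  # -> [9.0, 62.0]: a light-usage and a heavy-usage cluster
```

Achievement can then be compared across the resulting usage clusters, which is how effect sizes tied to usage intensity are produced.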