Anyone who works in education spaces likely can’t get away from the conversation around AI. The seemingly inescapable topic has arrived with an enormous splash as AI tools become more accessible to the everyday user. The discussion is deeply polarizing: some argue that students use AI only to cheat, while others see enormous potential and insist that higher education needs to get on board. EdTech companies are scrambling to embed AI features into their products in an effort to stay relevant.

As a practitioner of learning design, the question that stands out to me is: Where’s the evidence that any of this is working? Are students’ experiences improved? Does AI bring something effective to the experience, or is it just flashy and new? 

Whenever we put a new technology into a learning experience, we’re adding to the cognitive load for learners. In other words, now learners are responsible not only for the content they are supposed to be mastering, but also for navigating a technology tool and all the prior knowledge that is required to successfully use it. 

There are known gaps in AI awareness and use across demographics, particularly by gender, and these gaps closely track disparities in access to resources like reliable internet connections. To responsibly integrate AI, it’s critical to do so in ways that serve our student users, ensuring they are building competencies and skills, not just learning how to use a new technology to get through course content. The assessment team at Labs has seen tools that use AI to create customized learning scenarios that allow learners to apply their skills and demonstrate their capabilities. But how does this technology become part of an effective learning experience? What scaffolds need to be built? What else can we teach and learn as a result of these tools? How do we ensure access to AI-integrated learning experiences is equitable?

The AI race in EdTech is missing a key ingredient: evidence-based, learner-centered design. At WGU Labs, we have a team of learning experience designers (LXDs) who are working to fill this gap using best practices from learning sciences research. One of those practices is Design-Based Research (DBR). Read on to understand why DBR is the right approach for researching and testing AI interventions.

What is DBR? 

Design-Based Research (DBR) is a methodological approach developed in the learning sciences field that focuses on iterative, collaborative, and contextually grounded investigations of learning environments. DBR was developed and popularized by Brown and Collins in the 1990s as a response to limitations in traditional experimental research, which often struggles to capture the complexities of authentic learning settings. The primary advantage of DBR for this work is its emphasis on iteration: designing and refining educational interventions, such as an AI tool for assessment, directly within real-world contexts. This cyclical process allows for continuous adjustment, ensuring that findings are not only theoretically robust but, most importantly, applicable in practice.

DBR is particularly valuable for in-situ research projects, as it prioritizes the parallel development of theory and practice by directly engaging with learners, educators, and instructional environments. The methodology typically follows a sequence of analysis, design, implementation, and revision, incorporating both qualitative and quantitative methods to assess and refine interventions. DBR enables us to generate actionable insights that inform both educational theory and design. This approach is widely used in learning design to develop and test innovative pedagogical strategies, digital learning tools, and curriculum models, making it well suited to advancing the evidence-based educational improvements the AI EdTech race so desperately needs.

Advantages of DBR 

Brings the learner into the design process early

Often, an EdTech product is fully designed before it’s tested at scale with learners. Using a DBR approach instead brings learners into the design process early, ensuring features are driven by learner insights and feedback in highly relevant contexts. Incorporating those insights early allows us to focus on the features that matter most instead of retrofitting polished products with impactful features as afterthoughts.

Balances speed with rigor 

DBR was designed in response to criticism about the limitations of traditional research, particularly concerning speed and the applicability of findings. With its focus on iteration driven by data gathered in situ, DBR allows us to rapidly develop prototypes and fine-tune them as we explore and develop theories about how technology supports learning. This ensures that our designs are grounded in both theory and practical needs.

Allows us to consider nuanced definitions of success and impact 

When we design in context and plan for iterations, we’re able to notice other indicators of success along the way that might otherwise be missed. For example, we can gather insights from learners about which features help them develop trust in AI-enhanced experiences, a factor our research has shown is a key concern for learners weighing how AI affects their learning.

At WGU Labs, we’re currently practicing this approach across various projects. We launched the Student Insights Council, which gives us ready access to thousands of learners who are interested in sharing their opinions on the learning experience with our researchers. We begin most projects with student interviews to ensure our problem statements are focused on the right issues. In fact, our assessment research is driven largely by student interviews about the biggest pain points in the student journey. We’re now designing a tool that uses AI to address the issues raised during that discovery phase, and it will soon go back to current students for testing and refinement. It’s these strategic applications of new technology, aligned with student experiences, that make our work successful.

AI represents a compelling opportunity to design the future of learning that we all dream about: personalized, adaptive, affordable, and effective. But without evidence-based design strategies, we risk building products that don’t deliver the impact we hope for. By bringing learners into the design process early, generating design insights alongside theory, and remaining open to the iterations those insights point us toward, we can design valuable educational tools with rich features grounded in learning science.