What is it about?
Researchers who study expertise in digital environments regularly run into the same problem: self-reported measures of skill are convenient, but they are also consistently inaccurate. Players overestimate or underestimate their experience, which muddies the conclusions we can draw about what expertise actually looks like in practice. This study takes a different approach. Drawing on the Model of Domain Learning and digital proxemics theory, we developed the Behavioral Observation Matrix-Proxemics (BOM-Proxemics), a structured tool for coding observable in-game behaviors as direct evidence of expertise. Apex Legends was selected as the study environment because of its transparent ranking system, spatial complexity, and the continuous strategic demands it places on players, all of which create visible behavioral differences across skill levels. Using recorded gameplay from 102 players classified as novice, competent, or expert, the BOM-Proxemics demonstrated strong psychometric properties and accurately predicted in-game rank.
Why is it important?
Educators and education researchers who study expertise development often face a fundamental measurement problem: the tools commonly used to sort people by skill level depend on learners' own estimates of their ability, which are prone to bias. The Model of Domain Learning (MDL) offers a principled way to think about expertise as a progression through stages, but practical tools for operationalizing those stages in observable terms have been limited. This study addresses that gap directly. The BOM-Proxemics provides a psychometrically sound, behavior-based method for placing people along the expert-novice continuum in a way that aligns with MDL's three-stage structure. That alignment matters because it opens the door to studying how expertise actually develops and manifests in complex digital environments, not just how learners describe their own experience of it.
Perspectives
My work tends to sit at the place where assessment design and domain learning theory meet game-based environments, and this project is a good example of why that combination is worth pursuing. The Model of Domain Learning gave us a framework that is not just descriptive but actionable: if you can identify where a learner sits across the acclimation, competence, and proficiency stages, you can start asking more precise questions about what supports movement through those stages. What drew me to this work was the chance to build a measurement tool that reflects the actual behavioral demands of a complex domain, rather than asking someone to summarize years of experience in a single survey item. Whether the domain is a video game, a simulation, or a digital learning environment, I think this approach has real potential for education researchers who want to study expertise with more rigor.
Dr Sam Leif
Read the Original
This page is a summary of: Differentiating video game expertise using the Model of Domain Learning, Frontiers in Education, April 2026, Frontiers, DOI: 10.3389/feduc.2026.1794086.