What is it about?

There are not enough trained cybersecurity workers to fill available jobs. One way to close that gap is to make sure university courses are teaching the right skills. But reviewing course syllabi by hand is slow and labor-intensive. We used AI language models to automatically compare what is being taught in 141 cybersecurity courses at top U.S. universities against a standard list of cybersecurity jobs, tasks, and skills published by the U.S. government. This gives educators and program designers a faster, scalable way to spot gaps in their curriculum and see how well their courses line up with what the industry actually needs.
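The paper's actual pipeline is not reproduced here, but the core idea, scoring each course description against competency statements and keeping the closest matches, can be sketched with a simple bag-of-words cosine similarity as a lightweight stand-in for the language model. The competency statements below are hypothetical examples, not entries from the government framework.

```python
import math
import re
from collections import Counter

def tokens(text: str) -> Counter:
    """Lowercase bag-of-words vector (a crude stand-in for an LLM embedding)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(count * b[term] for term, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical competency statements -- illustrative only, not real framework text.
competencies = {
    "Network Defense": "monitor and defend networks against intrusions and attacks",
    "Digital Forensics": "collect and analyze digital evidence from compromised systems",
}

syllabus = "Students monitor networks, detect intrusions, and respond to attacks."

# Score the syllabus against every competency and pick the closest match.
scores = {name: cosine(tokens(syllabus), tokens(text))
          for name, text in competencies.items()}
best_match = max(scores, key=scores.get)
print(best_match)  # -> Network Defense
```

In the study itself a language model performs this matching, which lets the comparison capture meaning rather than shared keywords; the sketch above only illustrates the overall shape of the mapping step.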

Why is it important?

Most approaches to checking whether university courses are teaching the right skills require experts to read through course materials by hand, which is slow and expensive. We show that AI can do this work automatically and at scale. With cybersecurity jobs going unfilled at record rates, tools that help universities quickly identify and fix gaps in what they are teaching could make a real difference. And because our approach is flexible, it could be applied to other fields facing similar workforce challenges.

Perspectives

This project excites me because it connects two fields that rarely overlap: AI and curriculum design. It shows that powerful tools do not have to stay in research labs; they can solve real, practical problems in education. I hope it encourages educators across disciplines to think more carefully about whether their courses are truly preparing students for the workforce.

Alyssa Kalish
University of Illinois at Urbana-Champaign

Read the Original

This page is a summary of: Using a Language Model to Map Syllabi to Core Competencies, February 2026, ACM (Association for Computing Machinery),
DOI: 10.1145/3770761.3777226.
