What is it about?
This article presents a computational treatment of argument‑structure constructions within FunGramKB, a multipurpose lexico‑conceptual knowledge base designed for Natural Language Processing (NLP). The authors focus on three closely related English constructions—the resultative, the caused‑motion, and the way construction—and show how these can be represented formally using Attribute‑Value Matrices (AVMs) inside FunGramKB’s Grammaticon.

The paper explains why broad, highly abstract constructional descriptions (such as Goldberg’s general schemas) are insufficient for computational implementation: machines need fine‑grained, construction‑specific sub‑types in order to correctly match input sentences to constructional patterns.

Through detailed analyses, the authors demonstrate how FunGramKB integrates lexical information with constructional schemas via constraint‑based unification, how semantic interpretation is captured using the COREL metalanguage, and how construction behavior sometimes requires “splitting” a general construction into multiple sub‑constructions to handle variability (e.g., when a construction contributes only a result phrase vs. both an object and a result). Overall, the article combines insights from Cognitive Linguistics, Construction Grammar, Role and Reference Grammar, and knowledge engineering to model how constructions can be made machine‑tractable.
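The constraint‑based unification mentioned above can be pictured with a minimal sketch. The attribute names and data layout below are invented purely for illustration (FunGramKB’s actual Grammaticon AVMs and COREL representations are far richer); the point is only that fusion succeeds when lexical and constructional feature values are compatible and fails when they clash:

```python
def unify(lexical, constructional):
    """Naive AVM unification: merge two feature structures.

    Returns the merged attribute-value matrix, or None if any shared
    attribute carries conflicting atomic values (unification fails,
    i.e., the verb cannot fuse with this construction).
    """
    result = dict(constructional)
    for attr, value in lexical.items():
        if attr not in result:
            result[attr] = value
        elif isinstance(value, dict) and isinstance(result[attr], dict):
            merged = unify(value, result[attr])
            if merged is None:
                return None
            result[attr] = merged
        elif result[attr] != value:
            return None  # value clash: no lexical-constructional fusion
    return result

# Toy lexical entry and a toy caused-motion-like schema
# (hypothetical attribute names, for illustration only).
verb = {"pred": "hammer", "args": {"x": "effector"}}
construction = {"args": {"x": "effector", "y": "theme", "z": "result"}}

fused = unify(verb, construction)
# The construction contributes the extra arguments (y, z) that the
# verb's own entry does not license on its own.
```

This is, of course, only a schematic stand‑in for the article’s formal apparatus, but it conveys why underspecified schemas are unusable computationally: the machine needs explicit attributes and constraints to decide whether fusion goes through.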
Featured Image
Photo by Ling App on Unsplash
Why is it important?
This work is important because it bridges Cognitive Construction Grammar and computational linguistics—two fields that rarely meet in such depth. The article shows concretely what must change when theoretical constructions are implemented in NLP systems: highly abstract or underspecified schemas do not work for a machine, which requires precise descriptors, constraints, and explicit modeling of each constructional option. By demonstrating how FunGramKB handles lexical‑constructional integration, the paper offers a blueprint for computationally managing variability, metaphor‑based reinterpretations, and construction‑driven argument structure. It also highlights the limitations of relying solely on syntactic alternations or broad Goldbergian patterns, advocating instead for constructional sub‑type differentiation, similar to Boas’s “mini‑construction” approach. This contribution advances both theory and practice: it refines our understanding of construction behavior and provides a path toward robust natural‑language understanding grounded in linguistic theory rather than probabilistic shortcuts.
Perspectives
Writing this article offered an opportunity to bring together several strands of research—Construction Grammar, the Lexical Constructional Model, Role and Reference Grammar, and NLP engineering. I particularly valued showing how constructional meaning can be made computationally explicit, and how theoretical insights into lexical‑constructional fusion (including metaphor‑based reinterpretation) can guide algorithmic design. It was also intellectually rewarding to demonstrate that robust NLP requires more than statistical methods: it requires linguistically principled modeling, sub‑constructional detail, and a clear understanding of how meaning is built through form. FunGramKB provided an ideal environment to test these ideas, revealing both the promise and the challenges of implementing rich constructional knowledge in computational systems.
Professor Francisco J. Ruiz de Mendoza
University of La Rioja
Read the Original
This page is a summary of: Argument structure constructions in a Natural Language Processing environment, Language Sciences, March 2015, Elsevier. DOI: 10.1016/j.langsci.2015.01.001.