What is it about?
The paper addresses a common issue in C programming: typos and keyboard errors in keywords (such as stuck keys, deleted, mistyped, or transposed characters) that conventional compilers and IDEs cannot handle. To mitigate this, it proposes a fuzzy-automaton model for approximate string matching during lexical analysis. Fuzziness is introduced into lexemes, allowing the lexical analyzer to recognize and correct "fuzzy tokens" (i.e., near-misses of valid keywords). Algorithms and pseudo-code are given for computing the membership degrees of lexemes, classifying each as crisp (an exact match) or fuzzy. The fuzzy system is then trained with a neural network to assess its accuracy.
Why is it important?
The approach can automatically detect and correct minor keyword typos through fuzzy matching and neural validation, errors that standard compilers only report rather than repair.
Read the Original
This page is a summary of: Statistical analysis of lexemes generated in ‘C’ programming using fuzzy automation, Journal of Intelligent & Fuzzy Systems Applications in Engineering and Technology, January 2024, IOS Press,
DOI: 10.3233/jifs-223021.