What is it about?

This algorithm, the CM (Channels' Matching) algorithm, is an improved version of the EM algorithm. It is based on the semantic information theory proposed by the author many years ago. That theory includes the R(G) function, which gives the minimum Shannon mutual information R = R(G) for a given semantic mutual information G. It can be proved that the relative entropy between the sampling distribution and the predicted distribution equals R(G) - G. Matching the semantic channel to the Shannon channel maximizes G, while matching the Shannon channel to the semantic channel minimizes R; hence the iteration converges.
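The two matching steps can be sketched numerically. The following is a minimal illustrative sketch, not the paper's exact algorithm: it assumes a small discrete sample space with fixed component likelihoods, iterates only the mixture weights, and uses names of my own choosing. It also checks numerically that the relative entropy between the sampling distribution and the predicted distribution equals R(G) - G at every step.

```python
# Illustrative sketch of channel matching for a two-component mixture on a
# small discrete sample space. All names and the setup are assumptions for
# illustration, not the paper's notation.
import numpy as np

X = np.arange(6)
p_x = np.array([0.05, 0.15, 0.30, 0.25, 0.15, 0.10])  # sampling distribution P(x)

def discretized_gaussian(mu, sigma):
    """A component likelihood P(x|y_j): a Gaussian discretized over X."""
    w = np.exp(-0.5 * ((X - mu) / sigma) ** 2)
    return w / w.sum()

lik = np.vstack([discretized_gaussian(1.5, 1.0),
                 discretized_gaussian(3.5, 1.0)])  # component likelihoods, shape (2, 6)
p_y = np.array([0.5, 0.5])                          # initial mixture weights P(y)

kls = []
for _ in range(100):
    p_model = p_y @ lik                        # predicted distribution P(x)
    post = (p_y[:, None] * lik) / p_model      # Shannon channel P(y|x)
    # Shannon mutual information R of the channel, under the sampling P(x)
    R = float(np.sum(p_x * post * np.log(post / p_y[:, None])))
    # semantic mutual information G: average log normalized likelihood
    G = float(np.sum(p_x * post * np.log(lik / p_x)))
    kl = float(np.sum(p_x * np.log(p_x / p_model)))
    assert abs((R - G) - kl) < 1e-9            # the identity: KL = R(G) - G
    kls.append(kl)
    p_y = post @ p_x                           # update P(y) toward minimizing R
```

With these definitions the identity holds exactly, and the recorded relative entropy `kls` decreases toward its minimum as the two channels come to match.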

Why is it important?

The CM algorithm converges faster than the EM algorithm, and its convergence has a clearer theoretical justification. The CM algorithm can also be used for maximum-likelihood tests, estimations, and predictions, with higher efficiency and reliability.

Perspectives

The CM algorithm demonstrates the power of a new semantic information theory in which semantic information is defined by the log normalized likelihood. The new semantic information theory and the CM algorithm can be applied to many more areas.

Professor Chenguang Lu
Retired

Read the Original

This page is a summary of: Channels’ Matching Algorithm for Mixture Models, January 2017, Springer Science + Business Media,
DOI: 10.1007/978-3-319-68121-4_35.
