Read the Original
This page is a summary of: On the approximation by single hidden layer feedforward neural networks with fixed weights, Neural Networks, December 2017, Elsevier, DOI: 10.1016/j.neunet.2017.12.007.
Resources
Associated SageMath worksheet
arXiv version
Feedforward neural networks have wide applicability across scientific disciplines due to their universal approximation property. Some authors have shown that single hidden layer feedforward neural networks (SLFNs) with fixed weights still possess the universal approximation property, provided that the approximated functions are univariate. These results, however, place no restriction on the number of neurons in the hidden layer: the larger this number, the more precisely the network can approximate. In this note, we constructively prove that SLFNs with the fixed weight 1 and only two neurons in the hidden layer can approximate any continuous function on a compact subset of the real line. The applicability of this result is demonstrated in various numerical examples. Finally, we show that SLFNs with fixed weights cannot approximate all continuous multivariate functions.
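To make the form of the network concrete, below is a minimal sketch in Python/NumPy (in the spirit of the associated SageMath worksheet) of a two-neuron SLFN with fixed input weight 1. Note the hedges: the paper's result depends on a specially constructed sigmoidal activation function, whereas the standard logistic sigmoid used here is only a stand-in, and the coefficient and threshold values are hypothetical, chosen purely for illustration.

import numpy as np

# Sketch of a single hidden layer feedforward network with fixed input
# weight 1 and two hidden neurons:
#     N(x) = c0 + c1 * sigma(x - t1) + c2 * sigma(x - t2)
# The paper constructs a special activation sigma so that networks of this
# form approximate any continuous function on a compact subset of the real
# line; the logistic sigmoid below is a placeholder, not that construction.

def sigma(x):
    """Standard logistic sigmoid (stand-in for the paper's constructed activation)."""
    return 1.0 / (1.0 + np.exp(-x))

def slfn_two_neurons(x, c0, c1, c2, t1, t2):
    """Evaluate the fixed-weight (weight = 1) two-neuron SLFN at x."""
    return c0 + c1 * sigma(x - t1) + c2 * sigma(x - t2)

# Evaluate the network on [0, 1] with hypothetical parameter values.
xs = np.linspace(0.0, 1.0, 5)
print(slfn_two_neurons(xs, c0=0.5, c1=1.0, c2=-1.0, t1=0.25, t2=0.75))

Only the outer coefficients c0, c1, c2 and the thresholds t1, t2 vary with the target function; the input weights stay fixed at 1, which is what makes the two-neuron result notable.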