What is it about?

A feedforward neural network (FFNN) is an artificial neural network in which the connections between nodes do not form a directed cycle, which distinguishes it from recurrent neural networks. An FFNN is an interconnection of perceptrons in which data and calculations flow in a single direction, from the inputs to the outputs. The perceptrons are organised into layers, and the number of these layers determines the depth of the network. FFNNs can be used to approximate any mapping from inputs to outputs, and they are typically trained with gradient-based learning algorithms, of which steepest descent (gradient descent) is the most widely used.
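As a minimal sketch of these two ideas, one-directional flow and steepest-descent training, the following Python fragment builds a tiny one-hidden-layer FFNN. The layer sizes, learning rate, and training example are illustrative assumptions, not the model described in the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Weights for a hypothetical 3-input, 4-hidden-unit, 1-output network.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(4, 3))   # input -> hidden
b1 = np.zeros(4)
W2 = rng.normal(scale=0.1, size=(1, 4))   # hidden -> output
b2 = np.zeros(1)

def forward(x):
    # Data flows strictly forward: input -> hidden -> output, no cycles.
    h = sigmoid(W1 @ x + b1)
    y = sigmoid(W2 @ h + b2)
    return h, y

def train_step(x, target, lr=0.5):
    # One steepest-descent step on squared error, pushing the error
    # back through each layer with the chain rule.
    global W1, b1, W2, b2
    h, y = forward(x)
    err = y - target
    delta2 = err * y * (1 - y)             # output-layer gradient
    delta1 = (W2.T @ delta2) * h * (1 - h) # hidden-layer gradient
    W2 -= lr * np.outer(delta2, h)
    b2 -= lr * delta2
    W1 -= lr * np.outer(delta1, x)
    b1 -= lr * delta1

# Made-up training example: the output moves toward the target.
x, t = np.array([0.5, -1.0, 0.25]), np.array([1.0])
for _ in range(100):
    train_step(x, t)
print(forward(x)[1])
```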


Why is it important?

FFNNs have broad applicability to real-world business problems, and they have already been applied successfully in many industries. Because these networks are good at identifying patterns and trends in data, they are well suited to prediction and forecasting tasks such as industrial process control, sales forecasting, data validation, customer research, risk management, and target marketing. They have also been used in more specific applications, including interpretation of multi-meaning Chinese words, diagnosis of hepatitis, speaker recognition in conversations, recovery of telecommunications from faulty software, undersea mine detection, texture analysis, three-dimensional object recognition, handwritten word recognition, and facial recognition.

Perspectives

This work outperforms prior state-of-the-art approaches in terms of convergence rate, and it can handle larger datasets at lower computational cost. Ordinarily, an FFNN trained with the sigmoid activation function suffers from saturation, which makes training on high-dimensional datasets hardly possible. The technique implemented in this work avoids that problem, since our model does not involve the derivative of the sigmoid function.
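To see why the sigmoid derivative causes saturation, note that d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x)), which peaks at 0.25 at x = 0 and decays toward zero as |x| grows, so gradient updates that depend on it stall once units saturate. A short illustrative Python snippet (not from the paper) makes this concrete:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# The derivative collapses toward zero for large |x|: by x = 10 it is
# about 4.5e-5, so sigmoid-derivative-based updates barely move.
for x in [0.0, 2.0, 5.0, 10.0]:
    s = sigmoid(x)
    print(f"x = {x:5.1f}  sigmoid = {s:.6f}  derivative = {s * (1 - s):.6e}")
```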

Dr. Ben-Bright Benuwa
Data Link Institute

Read the Original

This page is a summary of: Taxonomy and a Theoretical Model for Feedforward Neural Networks, International Journal of Computer Applications, April 2017, Foundation of Computer Science. DOI: 10.5120/ijca2017913513.
