What is it about?

In recent years, research on artificial neural networks based on fractional calculus has attracted much attention. In this paper, we proposed a fractional-order deep backpropagation (BP) neural network model with L2 regularization. The proposed network is optimized by the fractional gradient descent method with the Caputo derivative. We also derived the necessary conditions for the convergence of the proposed network and analyzed the influence of L2 regularization on convergence using the fractional-order variational method. Experiments on the MNIST dataset demonstrate that the proposed network is deterministically convergent and can effectively avoid over-fitting.
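To make the update rule concrete, the sketch below shows one way a single Caputo-type fractional gradient descent step with L2 regularization can be written. The leading-term power-law approximation of the Caputo derivative, the choice of the initial weight as the lower terminal, and all names and hyperparameter values are illustrative assumptions, not the paper's exact formulation; in the paper the same kind of update is back-propagated layer by layer through the whole network.

```python
import numpy as np
from scipy.special import gamma

def caputo_fractional_grad(grad, w, w0, alpha):
    # Leading-term approximation of the Caputo fractional derivative of the
    # loss with respect to w, with the lower terminal taken as the initial
    # weight w0 (an illustrative choice):
    #   D^alpha E(w) ~= dE/dw * |w - w0|^(1 - alpha) / Gamma(2 - alpha)
    eps = 1e-8  # keeps the power term finite when w is very close to w0
    return grad * np.abs(w - w0 + eps) ** (1.0 - alpha) / gamma(2.0 - alpha)

def fractional_sgd_step(w, grad_data, w0, alpha=0.9, lr=0.01, lam=1e-4):
    # Ordinary gradient of the L2-regularized loss E(w) + (lam / 2) * ||w||^2 ...
    grad_total = grad_data + lam * w
    # ... pushed through the fractional-derivative approximation, then applied
    # as a plain gradient-descent update of the weights.
    return w - lr * caputo_fractional_grad(grad_total, w, w0, alpha)
```

The snippet only illustrates one parameter update; alpha is shown in (0, 1) for the simple truncation used here, whereas the paper extends the fractional order to any positive real number.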


Why is it important?

In this paper, we proposed a fractional-order deep backpropagation (BP) neural network model with L2 regularization. This model overcomes three problems of previous fractional BP neural networks: the number of layers was limited to three, the fractional order was limited to the (0, 1) interval, and they suffered from serious over-fitting.

Perspectives

In this paper, we applied fractional calculus and a regularization method to deep BP neural networks. Unlike previous studies, the proposed model places no limitation on the number of layers, and the fractional order is extended to any real number greater than 0. L2 regularization is also imposed on the error function. We analyzed the benefits that L2 regularization brings to the convergence of the proposed fractional-order BP network. The numerical results confirm that fractional-order BP neural networks with L2 regularization are deterministically convergent and can effectively avoid the over-fitting phenomenon. How to apply fractional calculus to other, more complex artificial neural networks is an attractive topic for our future work.
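As a complement, here is a minimal sketch of the L2-regularized error function mentioned above, with the penalty summed over the weight matrices of every layer so that the depth of the network is not restricted; the function name, interface, and regularization strength are illustrative assumptions.

```python
import numpy as np

def regularized_error(data_error, weight_matrices, lam=1e-4):
    # E(w) + (lam / 2) * sum over layers of ||W_l||^2: the penalty runs over
    # every layer, so the formulation places no restriction on network depth.
    return data_error + 0.5 * lam * sum(float((W ** 2).sum()) for W in weight_matrices)

# Example with three hypothetical weight matrices of a small MNIST-style network.
weights = [np.random.randn(784, 128), np.random.randn(128, 64), np.random.randn(64, 10)]
loss = regularized_error(data_error=0.37, weight_matrices=weights)
```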

Chunhui Bao

Read the Original

This page is a summary of: Fractional-Order Deep Backpropagation Neural Network, Computational Intelligence and Neuroscience, July 2018, Hindawi Publishing Corporation, DOI: 10.1155/2018/7361628.
You can read the full text via the DOI above (open access).


