What is it about?

Multilayer perceptron (MLP) networks with random hidden layers are very efficient at automatic feature extraction and significantly speed up training. They essentially employ a large collection of fixed, random features, and are well suited to form-factor-constrained embedded platforms. In this work, a reconfigurable and scalable hardware architecture is proposed for MLPs with random hidden layers, built around a customized building block based on the CORDIC algorithm. The proposed architecture also exploits fixed-point arithmetic for area efficiency. The design is validated for classification on two different datasets: it achieves 90% accuracy on the MNIST dataset and 75% on gender classification with the LFW dataset. The hardware shows a 299x speed-up over the corresponding software realization.
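The appeal of CORDIC as a hardware building block is that it evaluates rotations (and hence trigonometric-style functions) using only shifts and adds, no multipliers, which pairs naturally with fixed-point arithmetic. As a rough illustration of the principle only (not the paper's actual design; all names, the iteration count, and the Q16 format are assumptions), here is a fixed-point CORDIC rotation sketched in Python:

```python
# Illustrative fixed-point CORDIC (rotation mode), computing cos/sin of an
# angle using only integer shifts and adds in the iteration loop.
# All parameters below are illustrative, not taken from the paper.

import math

N_ITERS = 16      # number of CORDIC micro-rotations (assumed)
FRAC_BITS = 16    # Q16 fixed-point format (assumed)
ONE = 1 << FRAC_BITS

# Precomputed micro-rotation angles atan(2^-i), stored in fixed point
ANGLES = [int(round(math.atan(2.0 ** -i) * ONE)) for i in range(N_ITERS)]

# CORDIC gain: start the vector pre-scaled so no final multiply is needed
K = 1.0
for i in range(N_ITERS):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
K_FIXED = int(round(K * ONE))

def cordic_cos_sin(theta):
    """Return (cos(theta), sin(theta)) for |theta| <= pi/2.
    The loop body uses only shifts, adds, and a sign test."""
    z = int(round(theta * ONE))   # residual angle in fixed point
    x, y = K_FIXED, 0             # pre-scaled unit vector on the x-axis
    for i in range(N_ITERS):
        if z >= 0:                # rotate so the residual angle goes to 0
            x, y = x - (y >> i), y + (x >> i)
            z -= ANGLES[i]
        else:
            x, y = x + (y >> i), y - (x >> i)
            z += ANGLES[i]
    return x / ONE, y / ONE
```

In hardware, each iteration maps to a small shift-and-add stage, which is why an array of such units can be made both area-efficient and reconfigurable for the network's activation computations.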

Read the Original

This page is a summary of: Optimized hardware framework of MLP with random hidden layers for classification applications, May 2016, SPIE,
DOI: 10.1117/12.2225498.