What is it about?
Deep learning models today are built from extremely wide and deep layers that consume enormous amounts of computing power. To reduce this reliance on intensive computing resources, a stream of research on lighter, more efficient models has emerged. This paper introduces a novel method for selecting the most significant filters in deep neural networks: we simplify trained deep networks by pruning filters with a Genetic Algorithm (GA). A pure GA is weak at local fine-tuning and converges slowly, so it struggles to produce good results on problems with a search space as large as ours. We present new ideas that overcome some of these weaknesses, including efficient local optimization and a sped-up fitness evaluation, the step that dominates the running time. Further time is saved by using the Gray-Level Co-occurrence Matrix (GLCM) to gauge each filter's usefulness and restrict the set of filters to consider preserving. The saved time can then be spent on more GA iterations, opening the opportunity to optimize performance further. Experiments showed a reduction of more than 95% in forward convolution computation with negligible performance degradation.
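The paper itself defines exactly how the GLCM energy z-score is computed; as a rough illustration of the idea, here is a minimal Python sketch, assuming the GLCM is built from each filter's quantized activation map at a horizontal pixel offset, that "energy" is the standard sum of squared GLCM entries, and that a z-score threshold marks low-energy filters as pruning candidates. The names `glcm_energy`, `filters_to_keep`, and the parameter `z_thresh` are illustrative, not taken from the paper.

import numpy as np

def glcm_energy(feature_map: np.ndarray, levels: int = 8) -> float:
    """Energy (sum of squared probabilities) of a horizontal-offset GLCM."""
    fmin, fmax = float(feature_map.min()), float(feature_map.max())
    if fmax == fmin:
        quantized = np.zeros(feature_map.shape, dtype=np.int64)
    else:
        # Quantize the activation map into `levels` gray levels.
        quantized = np.minimum(
            (levels * (feature_map - fmin) / (fmax - fmin)).astype(np.int64),
            levels - 1,
        )
    # Count co-occurrences of gray levels at horizontal offset (0, 1).
    glcm = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(glcm, (quantized[:, :-1].ravel(), quantized[:, 1:].ravel()), 1.0)
    glcm /= glcm.sum()               # normalize to a joint probability table
    return float((glcm ** 2).sum())  # the "energy" texture feature

def filters_to_keep(feature_maps: np.ndarray, z_thresh: float = -1.0) -> np.ndarray:
    """Indices of filters whose GLCM-energy z-score exceeds `z_thresh`.

    `feature_maps` has shape (num_filters, H, W): one activation map
    per filter. Filters scoring below the threshold become pruning
    candidates, shrinking the space the GA has to search.
    """
    energies = np.array([glcm_energy(fm) for fm in feature_maps])
    z_scores = (energies - energies.mean()) / (energies.std() + 1e-12)
    return np.where(z_scores > z_thresh)[0]

Under these assumptions, restricting the GA's chromosome to the filters that survive this screening is what yields the additional time savings described above: fewer candidate filters means a smaller search space and cheaper evaluations per generation.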
Read the Original
This page is a summary of: Evolutionary Pruning of Deep Convolutional Networks by a Memetic GA with Sped-Up Local Optimization and GLCM Energy Z-Score, July 2023, ACM (Association for Computing Machinery), DOI: 10.1145/3583133.3590604.