PhD forum: Why TanH is a hardware friendly activation function for CNNs
Abstract
Convolutional Neural Networks (CNNs) [1] are the state of the art in image classification: they have improved the accuracy and robustness of machine vision systems at the price of a very high computational cost. This has motivated multiple research efforts investigating the applicability of approximate computing, and more particularly fixed-point arithmetic, to CNNs. In all these approaches, a recurrent problem is that the learned parameters in deep CNN layers have a significantly lower numerical dynamic range than the feature maps, which prevents the use of a low bit-width representation in deep layers. In this paper, we demonstrate that using the TanH activation function is a way to prevent this issue. To support this demonstration, three benchmark CNN models are trained with the TanH function. These models are then quantized using the same bit-width across all layers. In the case of FPGA-based accelerators, this approach requires the minimal amount of logic elements to deploy CNNs.
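To make the dynamic-range argument concrete, the following is a minimal sketch (not the authors' code) of uniform fixed-point quantization with a single format shared by every layer. The function name, the 8-bit Q1.7 format, and the synthetic data are illustrative assumptions; the point is that TanH bounds activations to (-1, 1), so one fixed-point format covers both feature maps and parameters, whereas an unbounded activation such as ReLU overflows it.

```python
# Sketch of uniform fixed-point quantization (hypothetical helper, not from the paper).
# Assumption: a signed Q1.7 format (8 bits, 7 fractional bits) shared by all layers.
import numpy as np

def quantize_fixed_point(x, bit_width=8, frac_bits=7):
    """Round x to the nearest value representable in a signed fixed-point
    format with `bit_width` total bits and `frac_bits` fractional bits."""
    scale = 2 ** frac_bits
    qmin = -(2 ** (bit_width - 1))
    qmax = 2 ** (bit_width - 1) - 1
    q = np.clip(np.round(x * scale), qmin, qmax)
    return q / scale

# TanH keeps activations in (-1, 1), so the same 8-bit format fits every layer;
# ReLU outputs are unbounded and get clipped, producing large errors.
rng = np.random.default_rng(0)
pre_activations = rng.normal(scale=3.0, size=1000)
tanh_out = np.tanh(pre_activations)          # bounded to (-1, 1)
relu_out = np.maximum(pre_activations, 0.0)  # unbounded

for name, t in [("tanh", tanh_out), ("relu", relu_out)]:
    err = np.abs(t - quantize_fixed_point(t)).max()
    print(f"{name}: range [{t.min():.2f}, {t.max():.2f}], "
          f"max quantization error {err:.4f}")
```

Running this sketch, the TanH outputs incur at most the rounding error of the format (about 2^-8), while the clipped ReLU outputs lose most of their magnitude, which is the behavior the uniform-bit-width approach described above relies on.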