For this example, with many input channels and multiple coarsening stages, training the parameters of the network directly in compressed/factorized form used just 0.12 GB to store the convolutional kernels, instead of the 32.19 GB that a standard, uncompressed network would need to store its unreasonably many dense kernels.
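To make the storage argument concrete, the following is a minimal sketch of how convolutional kernels can be trained directly in factorized form. It assumes a CP-style factorization and PyTorch; the layer name, rank, and initialization are illustrative and not necessarily the exact scheme used here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CPFactorizedConv2d(nn.Module):
    """2-D convolution whose kernel is stored as a rank-R CP
    factorization: four small factor matrices replace the dense
    (C_out, C_in, k, k) kernel tensor."""

    def __init__(self, in_channels, out_channels, kernel_size, rank, padding=0):
        super().__init__()
        self.padding = padding
        # Trainable factors; only rank*(C_out + C_in + 2k) numbers are stored.
        self.f_out = nn.Parameter(0.1 * torch.randn(out_channels, rank))
        self.f_in = nn.Parameter(0.1 * torch.randn(in_channels, rank))
        self.f_h = nn.Parameter(0.1 * torch.randn(kernel_size, rank))
        self.f_w = nn.Parameter(0.1 * torch.randn(kernel_size, rank))

    def forward(self, x):
        # Reassemble the dense kernel on the fly:
        # K[o,i,h,w] = sum_r f_out[o,r] * f_in[i,r] * f_h[h,r] * f_w[w,r]
        kernel = torch.einsum('or,ir,hr,wr->oihw',
                              self.f_out, self.f_in, self.f_h, self.f_w)
        return F.conv2d(x, kernel, padding=self.padding)

# Storage comparison for one layer with 512 -> 512 channels, 3x3 kernels:
dense = 512 * 512 * 3 * 3                # 2,359,296 dense parameters
layer = CPFactorizedConv2d(512, 512, 3, rank=32)
stored = sum(p.numel() for p in layer.parameters())  # 32*(512+512+3+3) = 32,960
print(f"compression factor: {dense / stored:.0f}x")  # ~72x for this layer
```

Only the factor matrices are updated during training, so the dense kernel is never stored; it is rebuilt per forward pass (more elaborate schemes avoid even that, applying a CP convolution as a sequence of small convolutions).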
The work in [26] presents a tensor-factorized ANN, which integrates TD and ANNs for multi-way feature extraction and classification. Nevertheless, although the idea is to compress data in order to reduce computational cost and processing time, these works compress or decompose the data of the hyperspectral images rather than the parameters of the network itself.
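For contrast with the parameter-side compression above, the following is a minimal sketch of the data-side approach taken by these works, using TensorLy's Tucker decomposition; the cube dimensions and multilinear ranks are illustrative stand-ins, not values from [26].

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# Synthetic stand-in for a hyperspectral cube: height x width x bands.
X = tl.tensor(np.random.rand(145, 145, 200))

# Data-side compression: a Tucker decomposition of the cube itself.
# The multilinear ranks (30, 30, 40) are illustrative.
core, factors = tucker(X, rank=[30, 30, 40])

original = X.size                                  # 145*145*200 values
stored = core.size + sum(f.size for f in factors)  # core + three factor matrices
print(f"stored {stored} values instead of {original} "
      f"({original / stored:.1f}x smaller)")

# The compressed representation can be expanded back when needed.
X_hat = tl.tucker_to_tensor((core, factors))
rel_err = tl.norm(X - X_hat) / tl.norm(X)
print(f"relative reconstruction error: {rel_err:.3f}")
```

The key distinction is that here the decomposition is applied once to the input tensor, while the network that consumes the (compressed or reconstructed) data keeps its full, dense parameter set.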