For many natural signals, however, sparse coding is applied to sub-elements (i.e., patches) of the signal, where such an assumption is invalid. Convolutional sparse coding explicitly models local interactions through the convolution operator; however, the resulting optimization problem is considerably ...
Fast Convolutional Sparse Coding in the Dual Domain
Convolutional sparse coding (CSC) is an important building block of many computer vision applications, ranging from image and video compression to deep learning. We present two contributions to the state of the art in CSC. First, we significantly ...
M. Šorel and F. Šroubek, "Fast convolutional sparse coding using matrix inversion lemma," Digital Signal Processing, vol. 55, pp. 44-51, 2016. doi:10.1016/j.dsp.2016.04.012
Z. Zhang et al., Sparse representation for robust abnormality detection in crowded scenes, Pattern Recognition (2014)
Y. Yuan et al., Depth-based subgraph convolutional auto-encoder for network representation learning, Pattern Recognition (2019)
Structured dictionary learning for abnormal event detection in crowded...
[8] An accurate and robust approach of device-free localization with convolutional autoencoder. (published in IEEE Internet of Things Journal, 6(3): 5825-5840, 2019).
[9] Accounting for part pose estimation uncertainties during trajectory generation for part pick-up using mobile manipulators. (published...
which enhances the network's learning ability and further improves feature extraction for concealed objects in images. ResNet-50 introduces a residual module in its convolutional layers, which solves the problem of training degradation caused by deepening the network. The network has ...
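The residual idea behind ResNet-50 can be sketched in a few lines: the block computes F(x) + x, so the identity shortcut lets information (and gradients) bypass the learned transform. The toy linear transform below is a hypothetical stand-in for the real convolutional layers, chosen only to make the mechanism visible.

```python
import numpy as np

def conv_block(x, w):
    # stand-in for the residual branch F(x); in ResNet-50 this is a
    # stack of convolutions with batch norm, not a single linear map
    return np.tanh(x @ w)

def residual_block(x, w):
    # residual module: output = F(x) + x (identity shortcut),
    # so the block can fall back to the identity when F contributes nothing
    return conv_block(x, w) + x

x = np.ones((1, 4))
w = np.zeros((4, 4))      # "untrained" weights: F(x) == 0
y = residual_block(x, w)  # reduces to the identity, y == x
```

With F(x) near zero the block passes its input through unchanged, which is why stacking many residual blocks does not degrade training the way stacking plain layers does.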
The lightweight convolutional neural network MobileNetV3, with 15 bneck layers, is employed as the main model in this research. Pre-trained weights from the ImageNet dataset are introduced, and the parameters of the bneck layers are frozen. The output classes of the Softmax layer are replaced with ...
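The freeze-the-backbone pattern described above can be sketched independently of any framework: the pretrained feature layers are excluded from the trainable parameter set, and only the new classification head is updated. The `Layer` class, dimensions, and class count below are hypothetical placeholders, not the paper's actual MobileNetV3 configuration.

```python
import numpy as np

class Layer:
    """Toy layer with a 'frozen' flag standing in for requires_grad=False."""
    def __init__(self, in_dim, out_dim, frozen=False):
        rng = np.random.default_rng(0)
        self.w = rng.standard_normal((in_dim, out_dim)) * 0.1
        self.frozen = frozen

    def forward(self, x):
        return np.maximum(x @ self.w, 0.0)  # ReLU

# "pretrained" bneck stack: parameters are frozen and reused as-is
backbone = [Layer(8, 8, frozen=True) for _ in range(3)]
# new output head replacing the original Softmax classes (here: 5 classes)
head = Layer(8, 5, frozen=False)

# only unfrozen layers would be handed to the optimizer
trainable = [layer for layer in backbone + [head] if not layer.frozen]
```

The payoff is that the optimizer only updates the head's weights, so fine-tuning is fast and the ImageNet features in the backbone are preserved.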
"Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding"
"Compressing Deep Convolutional Networks using Vector Quantization"
"And the Bit Goes Down: Revisiting the Quantization of Neural Networks"
...
Noise2Self-CNN: Noise2Self denoising via convolutional neural networks (CNN). This is the original approach of Noise2Self. In our experience, it is typically slower to train, and more prone to hallucination and residual noise, than FGR. ...
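The core Noise2Self idea is J-invariance: the prediction for a pixel may not look at that pixel itself, so the network cannot simply copy the noise through. A minimal sketch of this masking scheme, using a trivial neighbour-mean predictor in place of the CNN (the function name and grid size are illustrative, not from the Noise2Self code):

```python
import numpy as np

def masked_mean_denoise(img, mask):
    # J-invariant prediction: masked pixels are replaced by the mean of
    # their 4-neighbours, so each prediction never sees its own noisy value
    padded = np.pad(img, 1, mode="edge")
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    out = img.copy()
    out[mask] = neigh[mask]
    return out

rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 1.0, (16, 16))   # pure-noise stand-in for a noisy image
mask = np.zeros_like(noisy, dtype=bool)
mask[::4, ::4] = True                     # sparse grid of masked pixels

pred = masked_mean_denoise(noisy, mask)
# the self-supervised loss is evaluated only on the masked pixels
loss = np.mean((pred[mask] - noisy[mask]) ** 2)
```

In the CNN version, a network is trained with this masked loss instead of the fixed neighbour-mean rule; the masking is what makes training possible without clean targets.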
correlated with their labels (Shen et al., 2015). Recent works on representation learning using deep neural networks have shown practical value in various tasks, motivating a surge of work that utilizes convolutional neural networks as hash functions; see, for example, Çakir et al. (2018...