1. DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs [DeepFuse (ICCV 2017)] [Paper] [Code]
2. Multi-exposure fusion with CNN features [CNN (ICIP 2018)] [Paper] [Code]
3. Deep guided learning for fast multi-exposure image fusion [MEF-Net(...
In this section, the proposed CNN-based multi-focus image fusion method is presented in detail. The schematic diagram of our algorithm is shown in Fig. 1. In this study, we mainly consider the case in which there are only two pre-registered source images. To deal with more than two multi...
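The excerpt is cut off before describing how more than two source images are handled, but a common scheme is serial pairwise fusion: fuse the first two images, then fuse the result with the next source image, and so on. A minimal sketch of that scheme, with a per-pixel maximum standing in for the actual CNN-based two-image fusion:

```python
from functools import reduce
import numpy as np

def fuse_pair(a, b):
    # Placeholder for the CNN-based fusion of two pre-registered
    # images; a per-pixel maximum is used purely for illustration.
    return np.maximum(a, b)

def fuse_all(images):
    # Serial pairwise fusion: each intermediate result is fused
    # with the next source image until one image remains.
    return reduce(fuse_pair, images)

imgs = [np.random.rand(8, 8) for _ in range(3)]
fused = fuse_all(imgs)
print(fused.shape)  # (8, 8)
```

The serial scheme keeps the core network a strictly two-input model while still scaling to an arbitrary number of sources.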
A feature vector map is created by combining the features of HOG and VGG19. Multi-class classification is accomplished by a CNN using the feature vector maps. DVFNet achieves an accuracy of 98.32% on the ISIC 2019 dataset. An analysis of variance (ANOVA) statistical test is used t...
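The key step here is concatenating a hand-crafted HOG descriptor with deep VGG19 features into one vector. A toy sketch of that combination, with a simplified single-histogram HOG (real HOG uses cells and blocks) and a hypothetical stand-in for the VGG19 features:

```python
import numpy as np

def simple_hog(img, bins=9):
    # Simplified HOG-style descriptor: one unsigned-orientation
    # histogram over the whole image, weighted by gradient magnitude.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

def deep_features(img):
    # Hypothetical stand-in for pretrained VGG19 activations;
    # a real pipeline would run the image through the network.
    return img.mean(axis=1)

img = np.random.rand(32, 32)
combined = np.concatenate([simple_hog(img), deep_features(img)])
print(combined.shape)  # (41,)
```

The combined vector would then be fed to the CNN classifier; the descriptor lengths here are illustrative only.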
The CNN method can judge the sharp regions of multi-focus images fairly accurately, but distortion appears at the fusion boundaries; the GCF method can judge the blurred regions of multi-focus images fairly accurately, but visual artifacts appear along the fusion boundaries; images obtained by the U2FUSION method suffer from detail loss, so some details in the image cannot be observed clearly; although the MWGF method can fuse some multi-focus images fairly accurately, for...
deep learning, and their hybrids have been discussed in detail along with their drawbacks and challenges. In addition, both types of parametric evaluation metrics, i.e. "with reference" and "without reference", have also been discussed. Then, a comparative analysis of nine image fusion methods is perf...
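The two metric families mentioned above differ in their inputs: "with reference" metrics compare the fused image against a ground-truth image, while "without reference" metrics are computed from the fused image alone. A minimal sketch with one standard example of each (PSNR and image entropy; the specific metrics are illustrative, not necessarily the ones compared in the survey):

```python
import numpy as np

def psnr(fused, reference, peak=1.0):
    # "With reference": needs a ground-truth image to compare against.
    mse = np.mean((fused - reference) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def entropy(fused, bins=256):
    # "Without reference": computed from the fused image alone,
    # as the Shannon entropy of its intensity histogram.
    hist, _ = np.histogram(fused, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

fused = np.random.rand(16, 16)
reference = np.clip(fused + 0.05, 0.0, 1.0)
print(psnr(fused, reference), entropy(fused))
```

Higher is better for both: PSNR rewards closeness to the reference, while entropy rewards richer intensity content in the fused result.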
Multi-focus image fusion is the process of fusing multiple images with different focus areas into a single all-in-focus image, which has important application value. In view of the shortcomings of current fusion methods in retaining the detail information of th
In this paper, we propose TransMEF, a transformer-based multi-exposure image fusion framework that uses self-supervised multi-task learning. The framework is based on an encoder-decoder network, which can be trained on large natural image datasets and does not require ground truth fusion images....
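The central idea in this excerpt is that the encoder-decoder can be trained self-supervised on natural images, with no ground-truth fused images. A minimal sketch of that training setup, reconstructing clean patches from corrupted inputs; the architecture and the corruption pretext task are illustrative assumptions, not TransMEF's actual design:

```python
import torch
import torch.nn as nn

# Tiny encoder-decoder; channel counts and depth are illustrative.
encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
decoder = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 3, padding=1))

opt = torch.optim.Adam(list(encoder.parameters()) +
                       list(decoder.parameters()), lr=1e-3)

x = torch.rand(4, 1, 32, 32)               # natural-image patches
corrupted = x + 0.1 * torch.randn_like(x)  # self-supervised pretext task
recon = decoder(encoder(corrupted))
loss = nn.functional.mse_loss(recon, x)    # no fusion ground truth needed
opt.zero_grad()
loss.backward()
opt.step()
print(recon.shape)  # torch.Size([4, 1, 32, 32])
```

At inference time, the two exposures would each be encoded, their features merged, and the result decoded; only the self-supervised reconstruction objective is shown here.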
To deal with this problem, we design the feature fusion layer to fuse the features of different modalities. The features with the same scale from event streams and occluded frames are concatenated and convolved by two convolutional layers with a kernel size of 3. Then the fused ...
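The concatenate-then-convolve step described above can be sketched as follows; channel counts, activations, and padding are assumptions for illustration, since the excerpt specifies only the concatenation and the two 3x3 convolutions:

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Fuses same-scale features from two modalities by channel-wise
    concatenation followed by two 3x3 convolutions."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, event_feat, frame_feat):
        # Concatenate along the channel axis, then convolve.
        return self.fuse(torch.cat([event_feat, frame_feat], dim=1))

fusion = FeatureFusion(channels=64)
e = torch.randn(1, 64, 32, 32)  # event-stream features
f = torch.randn(1, 64, 32, 32)  # occluded-frame features
out = fusion(e, f)
print(out.shape)  # torch.Size([1, 64, 32, 32])
```

With padding of 1, the 3x3 convolutions preserve the spatial size, so the fused map can drop back into the same scale of the decoder.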
In the 3D multi-depth reinforcement U-Net model, the hierarchical features from the 3D U-Net are enhanced by the cross-resolution attention module (CRAM) and dual-branch graph convolution module (DBGCM). The CRAM preserves local details by integrating adjacent low-level features with different ...