Second, we introduce a novel mutual learning strategy for the different single-modality ID embedding methods to further learn discriminative representations across modalities. Albeit simple, extensive experiments show that our method outperforms the state of the art on the RegDB and SYSU-MM01 datasets. Source code is publicly available at: http...
a scalable deep learning framework that embeds data modalities into a shared low-dimensional latent space that preserves cell trajectory structures in the original datasets. scDART is a diagonal integration method for unmatched scRNA-seq and scATAC-seq data, which is considered a more...
Here, we introduce a model named Contrastive Learning of Language Embedding and Biological Features (CLEF), which leverages contrastive learning to integrate PLM representations with supplementary biological features. Biological information is captured in the learned contextualized embeddings to yield meaningful ...
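The excerpt does not show CLEF's actual loss, but contrastive integration of two embedding spaces is commonly trained with an InfoNCE-style objective that pulls each representation toward its paired counterpart and away from the other pairs in the batch. A minimal sketch under that assumption (function names and the temperature value are illustrative, not from the paper):

```python
import math

def cosine(u, v):
    """Cosine similarity of two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(plm_emb, bio_emb, temperature=0.1):
    """Generic InfoNCE loss: the i-th PLM embedding should be more similar
    to its paired biological-feature embedding than to any other pair's."""
    n = len(plm_emb)
    loss = 0.0
    for i in range(n):
        logits = [cosine(plm_emb[i], bio_emb[j]) / temperature for j in range(n)]
        m = max(logits)  # log-sum-exp with max subtracted for stability
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_denom)
    return loss / n
```

With perfectly aligned pairs the loss approaches zero; shuffling the pairing drives it up, which is the signal that trains the shared embedding space.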
The learning rate is set to 0.0001, with a momentum of 0.99 and a weight decay of 0.0005. Other parameters are set to the defaults in [9]. Cross-modality generation. Evaluation metrics. We report results on mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and mutual information (MI)...
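Two of the reported metrics are straightforward to compute from flattened intensity values; MI is omitted here because it needs joint-histogram estimation. A minimal sketch of MAE and PSNR (the `max_val` default assumes intensities normalized to [0, 1], which the excerpt does not state):

```python
import math

def mae(pred, target):
    """Mean absolute error over flat intensity lists; lower is better."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

For example, a constant error of 0.1 on [0, 1]-scaled intensities gives an MSE of 0.01 and hence a PSNR of 20 dB.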
Learning-based registration methods perform pixel-level and feature-level alignment by directly estimating the distortion field between the distorted image and its reference image [15,16]. Such algorithms for direct estimation of deformation fields, while well suited to unimodal registration ...
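Once a dense distortion field has been estimated, applying it amounts to resampling the moving image at displaced coordinates. A minimal nearest-neighbour sketch of that resampling step (a toy stand-in for the bilinear warping layers real registration networks use; the field convention chosen here is an assumption):

```python
def warp(image, field):
    """Apply a dense displacement field to a 2-D image (nearest neighbour).
    field[y][x] = (dy, dx): output pixel (y, x) samples input at (y+dy, x+dx).
    Coordinates are clamped to the image border."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = field[y][x]
            sy = min(max(int(round(y + dy)), 0), h - 1)
            sx = min(max(int(round(x + dx)), 0), w - 1)
            out[y][x] = image[sy][sx]
    return out
```

An all-zero field is the identity transform; a constant field shifts the whole image, which is the degenerate rigid case of the general deformable setting.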
Dual Swin-Transformer based mutual interactive network for RGB-D salient object detection. 2023, Neurocomputing. Citation excerpt: For the model evaluation, we also provide the performance results on SIP [71] and STEREO [77]. In this section, we conduct experiments to compare the performance of ...
The two adaptive perspectives are guided by adversarial learning with partial parameter sharing, exploiting their mutual benefits to reduce domain shift during end-to-end training. We validate the adaptability of the method on unpaired MR-to-CT cardiac segmentation by comparing it against various state-of-the-art methods. Experimental results show that the network outperforms the others in both Dice and ASD values. Our method is general and can easily be extended to other unsupervised...
Unsupervised learning; Infrared (IR) polarization; IR polarization-visible image fusion. The fusion of multi-modal images to create an image that preserves the unique features of each modality as well as the features shared across modalities is a challenging task, particularly in the context of infrared (...
late fusion. These methods do not fully extract or fuse the cross-modality information. Moreover, deep-learning-based rigid registration of cardiac SPECT and CT-derived μ-maps has not been investigated before. In this paper, we propose a Dual-Branch Squeeze-Fusion-Excitation (DuSFE) module for ...
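The DuSFE internals are not given in the excerpt, but its name points at the squeeze-and-excitation pattern: globally pool each channel, derive a per-channel gate, and reweight the features before fusion. A parameter-free sketch of that pattern (a real SE block learns the gate with a small two-layer MLP; the plain sigmoid here is a stand-in):

```python
import math

def squeeze_excitation(channels):
    """Toy squeeze-and-excitation over a list of 2-D feature maps.
    Squeeze: global average pool each channel to one scalar.
    Excitation: a sigmoid gate stands in for the learned MLP of a real SE block.
    Scale: reweight every value in each channel by its gate."""
    squeezed = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
                for ch in channels]
    gates = [1.0 / (1.0 + math.exp(-s)) for s in squeezed]
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(channels, gates)]
```

In a dual-branch setting, gates computed from one modality's pooled statistics can modulate the other branch's channels, which is one common way such modules cross-attend before fusion.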
Recently, deep learning super-resolution (SR) methods have demonstrated great potential in enhancing the resolution of MRI images; however, most of them do not seriously exploit the cross-modality and internal priors of MR images, which hinders SR performance. In this paper, we propose a cross-...