Through pre-training on large-scale corpora, LLMs can effectively improve performance on a wide range of downstream tasks, including but not limited to clinical note summarization [12], biomedical natural language tasks [13], and text-to-image generation [14], and LLMs outperform medical experts in ...
With the advancement of Transformer-based pre-trained language models such as BERT [50], encoders acquire richer semantic information through pre-training on extensive data and can therefore provide more accurate and comprehensive representations for the sentences in the support and query sets. Compared to other encoders, the ...
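To make the encoding step concrete, the following is a minimal sketch of producing sentence representations for support- and query-set examples with a pre-trained BERT encoder. It assumes the Hugging Face Transformers library and the `bert-base-uncased` checkpoint; the [CLS]-token pooling and the cosine-similarity comparison are illustrative choices, not necessarily the method described here.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative checkpoint; any BERT-style pre-trained encoder works the same way.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()

def encode(sentences):
    """Return one fixed-size embedding per sentence (the final-layer [CLS] token)."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        outputs = encoder(**batch)
    # [CLS] embeddings, shape (num_sentences, hidden_size)
    return outputs.last_hidden_state[:, 0]

# Hypothetical support/query sentences for a few-shot episode.
support_embeddings = encode(["An example sentence from the support set."])
query_embeddings = encode(["An example sentence from the query set."])

# A query can then be matched against support examples, e.g. by cosine similarity.
scores = torch.nn.functional.cosine_similarity(query_embeddings, support_embeddings)
```

Because both sets are passed through the same frozen encoder, support and query sentences land in a shared embedding space, which is what allows similarity-based comparison between them.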