Recent findings highlight the remarkable cross-lingual transfer (CLT) capabilities of Multilingual BERT (mBERT), a model pre-trained on Wikipedia data in 104 languages. Fine-tuning mBERT on annotated samples from the source language has emerged as the mainstream approach for cross-lingual text label prediction [15]. ...
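As a rough illustration of this recipe, the sketch below fine-tunes the publicly available bert-base-multilingual-cased checkpoint on a handful of source-language (English) examples using the Hugging Face transformers library, then predicts labels for target-language sentences zero-shot. The dataset, label set, and hyperparameters are illustrative placeholders, not those used in [15].

```python
# Minimal sketch of the mainstream CLT recipe: fine-tune mBERT on
# source-language labels, then apply it zero-shot to a target language.
# Texts and labels below are toy placeholders for illustration only.
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-multilingual-cased"  # mBERT checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Source-language (English) training examples: (text, label)
source_train = [("the movie was great", 1), ("a dull, tedious film", 0)]
# Target-language (here German) examples, used only for zero-shot evaluation
target_eval = [("der Film war großartig", 1), ("ein langweiliger Film", 0)]

def encode(batch):
    """Tokenize a list of (text, label) pairs into a model-ready batch."""
    texts, labels = zip(*batch)
    enc = tokenizer(list(texts), padding=True, truncation=True, return_tensors="pt")
    enc["labels"] = torch.tensor(labels)
    return enc

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # a few passes over the source-language data
    for batch in DataLoader(source_train, batch_size=2, shuffle=True, collate_fn=encode):
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Zero-shot transfer: predict on the target language without ever
# seeing target-language labels during fine-tuning.
model.eval()
with torch.no_grad():
    preds = model(**encode(target_eval)).logits.argmax(dim=-1)
print(preds.tolist())
```

The key design point the sketch captures is that all supervision comes from the source language; transfer to the target language relies entirely on the multilingual representations learned during mBERT's pre-training.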