We propose Continual Prompted Transformer for Test-Time Training (CPT4), a method that augments the Vision Transformer (ViT) with shared prompts (small sets of learnable parameters) and a batch normalization module to mitigate catastrophic forgetting and handle domain shifts ...
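
The shared-prompt mechanism can be sketched as follows. This is a minimal illustration, not the paper's implementation: the shapes, the variable names, and the choice to prepend prompts to the patch-token sequence are assumptions made for clarity, since the exact formulation is not given here.

```python
import numpy as np

rng = np.random.default_rng(0)

embed_dim, num_prompts, num_patches = 8, 4, 16

# Shared prompts: small learnable parameter vectors reused across tasks.
# (Illustrative shapes only; the paper's actual dimensions are not stated.)
prompts = rng.normal(scale=0.02, size=(num_prompts, embed_dim))

def prepend_prompts(patch_tokens: np.ndarray, prompts: np.ndarray) -> np.ndarray:
    """Prepend shared prompt tokens to a ViT patch-token sequence,
    so the transformer attends jointly to prompts and image patches."""
    return np.concatenate([prompts, patch_tokens], axis=0)

patch_tokens = rng.normal(size=(num_patches, embed_dim))
tokens = prepend_prompts(patch_tokens, prompts)
print(tokens.shape)  # (20, 8)
```

Because the prompts are shared rather than task-specific, only this small parameter set (plus the normalization statistics) would adapt at test time, leaving the backbone weights untouched.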