To augment the LLM's ability to reason with time series data, we propose Prompt-as-Prefix (PaP), which enriches the input context and directs the transformation of reprogrammed input patches. The transformed time series patches from the LLM are finally projected to obtain the forecasts. Our ...
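To make that flow concrete, here is a minimal PyTorch sketch of the described pipeline, assuming a HuggingFace-style frozen backbone that accepts precomputed input embeddings. The class name, dimensions, and the flatten-then-project head are illustrative choices, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class PromptAsPrefixForecaster(nn.Module):
    """Minimal sketch of the Time-LLM flow: an embedded text prompt is
    prepended (Prompt-as-Prefix) to reprogrammed time series patch
    embeddings, the frozen LLM transforms the sequence, and the patch
    outputs are projected to the forecast horizon."""

    def __init__(self, llm, d_llm, num_patches, horizon):
        super().__init__()
        self.llm = llm  # frozen backbone, e.g. a HuggingFace GPT2Model
        for p in self.llm.parameters():
            p.requires_grad = False  # backbone stays intact; only heads train
        # flatten all patch outputs and map them to the horizon
        self.projection = nn.Linear(num_patches * d_llm, horizon)

    def forward(self, prompt_emb, patch_emb):
        # prompt_emb: (B, L_prompt, d_llm) embedded context prompt
        # patch_emb:  (B, N_patch, d_llm)  reprogrammed patch embeddings
        x = torch.cat([prompt_emb, patch_emb], dim=1)  # prompt as prefix
        h = self.llm(inputs_embeds=x).last_hidden_state
        h = h[:, -patch_emb.size(1):, :]               # keep patch positions
        return self.projection(h.flatten(start_dim=1)) # (B, horizon)
```

Keeping the backbone frozen means only the reprogramming and projection layers carry trainable parameters, which is the efficiency argument behind this design.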
1. Time-LLM: Time Series Forecasting by Reprogramming Large Language Models. In brief: Time-LLM is a method that reprograms large language models (LLMs) for general time series forecasting, by reprogramming the input time series with text prototypes and using Prompt-as-Prefix (PaP) to strengthen the LLM's reasoning over time series data.
2. OFA: On...
Implementation code for several of these papers: "Time-LLM: Time Series Forecasting by Reprogramming Large Language Models" (ICLR 2024), GitHub: github.com/KimMeen/Time-LLM; "Test-Time Adaptation with CLIP Reward for Zer...
In the paper "A decoder-only foundation model for time-series forecasting", Google researchers set out to design a time series foundation model and achieved good results on zero-shot tasks. Paper link: https://arxiv.org/abs/2310.10688. In this study, the researchers designed TimesFM, a time series foundation model for forecasting; across a variety of public datasets, its zero...
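As a rough illustration of what "decoder-only" means for forecasting, the sketch below patches the input history, runs a causal Transformer over the patch sequence, and predicts an output patch at the last position. The layer sizes and the input/output patch lengths are placeholders, and this is not TimesFM's actual code.

```python
import torch
import torch.nn as nn

class DecoderOnlyForecaster(nn.Module):
    """Illustrative decoder-only forecaster: the history is split into
    fixed-length input patches, a causal Transformer runs over the patch
    sequence, and the last position predicts a longer output patch."""

    def __init__(self, in_patch=32, out_patch=128, d_model=256,
                 n_layers=4, n_heads=4):
        super().__init__()
        self.in_patch = in_patch
        self.embed = nn.Linear(in_patch, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, out_patch)

    def forward(self, history):
        # history: (B, T) with T divisible by in_patch
        B, T = history.shape
        patches = history.view(B, T // self.in_patch, self.in_patch)
        h = self.embed(patches)
        # causal mask: each patch attends only to earlier patches
        mask = nn.Transformer.generate_square_subsequent_mask(h.size(1))
        h = self.decoder(h, mask=mask.to(h.device))
        return self.head(h[:, -1])  # (B, out_patch): next chunk of the future
```

Longer horizons would then be produced autoregressively, appending each predicted output patch to the history and decoding again.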
Topics: large time series models (Large Time Series Model), anomaly detection (Anomaly Detection), time series imputation, pre-trained language models (PLM) as foundation models. Speaker introduction: (1) Presenter: Liu Shuo, a combined master's-PhD student (class of 2022) at the Institute of Computing Technology, Chinese Academy of Sciences, whose research focuses on spatiotemporal data mining and anomaly detection...
Recently, a Google paper stirred up some controversy on X and other social media platforms. The paper is titled "A decoder-only foundation model for time-series forecasting". In short, time series forecasting means analyzing the trends and patterns in historical data to predict future values. Such techniques are used in weather forecasting, traffic flow prediction, and commercial sales...
[ICLR 2024] Official implementation of "Time-LLM: Time Series Forecasting by Reprogramming Large Language Models" - George-DTLab/Time-LLM
Recent work in time series analysis has increasingly focused on adapting pretrained large language models (LLMs) for forecasting (TSF), classification, and anomaly detection. These studies suggest that language models, designed for sequential dependencies in text, could generalize to time series data. ...
Time series forecasting holds significant importance in many real-world dynamic systems and has been extensively studied. Unlike natural language processing (NLP) and computer vision (CV), where a single large model can tackle multiple tasks, models for time series forecasting are often specialized, necessitating distinct designs for different tasks and applications.
The development of foundation models in time series domains has been constrained by data sparsity. Recent studies have revealed that large language models (LLMs) possess robust pattern recognition and reasoning abilities over complex sequences of tokens. However, the challenge remains in effectively aligning the modalities of time series data and natural language to leverage these capabilities.
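One way to picture this alignment step, as a hedged sketch rather than Time-LLM's exact module: let patch embeddings cross-attend over a small bank of "text prototype" vectors, so each patch is re-expressed in the language model's representation space. In the paper the prototypes are derived from the LLM's word embeddings; here they are a freely learned parameter for brevity, and all names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class PatchReprogramming(nn.Module):
    """Sketch of the modality-alignment idea: time series patch embeddings
    cross-attend over a compact bank of text-prototype vectors,
    re-expressing each patch in the LLM's representation space."""

    def __init__(self, d_patch, d_llm, n_prototypes=100, n_heads=4):
        super().__init__()
        # compact prototype vocabulary; learned freely here for brevity
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, d_llm))
        self.q_proj = nn.Linear(d_patch, d_llm)
        self.attn = nn.MultiheadAttention(d_llm, n_heads, batch_first=True)

    def forward(self, patch_emb):
        # patch_emb: (B, N_patch, d_patch) from a patch encoder
        q = self.q_proj(patch_emb)  # move patches into LLM space
        kv = self.prototypes.unsqueeze(0).expand(q.size(0), -1, -1)
        out, _ = self.attn(q, kv, kv)  # patches query the prototypes
        return out                     # (B, N_patch, d_llm)
```

The output embeddings are what a frozen LLM backbone would then consume, as in the Prompt-as-Prefix sketch above.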