Spatial-Temporal Large Language Model for Traffic Prediction. Traffic prediction: forecasting traffic flow. A new embedding layer is added for the time-series (TS) input; all parameters of the first F layers of the LLM are frozen, while the multi-head attention parameters of the LLM layers after layer F are fully fine-tuned; the LLM output is fed into a regression convolution layer, as sketched below. 8. Route (3) LLM4TS: Two-Stage Fine-Tuning for Time-Series Forecasting with Pre-Trained LLMs. Zhihu introduction: h...
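A minimal PyTorch sketch of the partial-freezing recipe just described, assuming a GPT-2 backbone; the value of F_FROZEN, the TS embedding layer, and the Conv1d regression head are illustrative placeholders, not the paper's exact design:

```python
import torch
import torch.nn as nn
from transformers import GPT2Model

F_FROZEN = 6  # number of leading LLM blocks to freeze (illustrative choice)

class PartiallyFrozenLLMRegressor(nn.Module):
    def __init__(self, d_in: int, horizon: int):
        super().__init__()
        self.llm = GPT2Model.from_pretrained("gpt2")
        d_model = self.llm.config.n_embd
        # New embedding layer mapping raw TS features to the LLM width.
        self.ts_embed = nn.Linear(d_in, d_model)
        # Regression head: 1x1 convolution over the LLM output channels.
        self.head = nn.Conv1d(d_model, horizon, kernel_size=1)

        # Freeze every LLM parameter, then unfreeze only the multi-head
        # attention weights of the blocks after the first F_FROZEN.
        for p in self.llm.parameters():
            p.requires_grad = False
        for block in self.llm.h[F_FROZEN:]:
            for p in block.attn.parameters():
                p.requires_grad = True

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_in)
        h = self.ts_embed(x)                             # (batch, seq_len, d_model)
        h = self.llm(inputs_embeds=h).last_hidden_state  # partially tuned LLM
        return self.head(h.transpose(1, 2))              # (batch, horizon, seq_len)
```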
Motivated by recent advances in large language models for Natural Language Processing (NLP), we design a time-series foundation model for forecasting whose out-of-the-box zero-shot performance on a variety of public datasets comes close to the accuracy of state-of-the-art supervised forecasting ...
To use pretrained LLMs for continuous time-series prediction, researchers present the very straightforward LLMTime approach (its serialization idea is sketched below), which is depicted at a high level in Figure 1. This technique, which considers time-series forecast...
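A minimal sketch of the core trick, assuming the common LLMTime-style encoding (fixed decimal precision, decimal point dropped, digits space-separated, timesteps comma-separated); the exact formatting varies with the tokenizer and is an assumption here:

```python
# Encode numeric values as digit strings so a text-only LLM can forecast
# the next values by next-token prediction.
def serialize(values, precision: int = 2) -> str:
    tokens = []
    for v in values:
        s = f"{v:.{precision}f}".replace(".", "")  # e.g. 1.23 -> "123"
        tokens.append(" ".join(s))                 # "123" -> "1 2 3"
    return " , ".join(tokens)

def deserialize(text: str, precision: int = 2) -> list:
    # Invert the encoding: strip spaces, reinsert the implied scale.
    return [int(tok.replace(" ", "")) / 10**precision for tok in text.split(",")]

# Example round trip:
# serialize([1.23, 4.56]) == "1 2 3 , 4 5 6"
# deserialize("1 2 3 , 4 5 6") == [1.23, 4.56]
```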
Time series (TS) modeling is essential in dynamic systems like weather prediction and anomaly detection. Recent studies utilize Large Language Models (LLMs) for TS modeling, leveraging their powerful pattern recognition capabilities. These methods primarily position LLMs as the predictive backbone, often...
While xLSTM appears efficient enough to be a serious competitor to the Transformer architecture, that may not be enough to overcome the inertia Transformers enjoy in LLMs. xLSTM could instead find a niche in time-series prediction or physical-system modeling, should it prove most efficient there...
EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction. For high-resolution dense prediction tasks, this work proposes a multi-scale linear attention mechanism. Compared with conventional attention computation, KV needs to be computed only once and can then be reused, achieving O(N) computational and memory complexity (see the sketch below). To address linear attention's weak ability to capture local information, it introduces...
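To make the O(N) claim concrete, here is a minimal sketch of the ReLU-kernel linear attention used in this family of models; the function name and tensor shapes are illustrative, and this is only the attention core, not EfficientViT's full multi-scale module:

```python
import torch
import torch.nn.functional as F

def relu_linear_attention(q, k, v, eps: float = 1e-6):
    # q, k: (batch, heads, N, d); v: (batch, heads, N, d_v)
    q, k = F.relu(q), F.relu(k)
    # K^T V is computed once in O(N * d * d_v) and reused for every query,
    # avoiding the N x N attention matrix of softmax attention.
    kv = torch.einsum("bhnd,bhne->bhde", k, v)
    # Per-query normalizer: q_i . sum_j k_j
    z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + eps)
    out = torch.einsum("bhnd,bhde->bhne", q, kv)
    return out * z.unsqueeze(-1)
```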
Shao Z., Zhang Z., Wang F. Pre-training Enhanced Spatial-Temporal Graph Neural Network for Multivariate Time Series Forecasting[C]//Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining: ...
2. Lag-Llama: Towards Foundation Models for Time Series Forecasting. Summary: To build foundation models for time-series forecasting and study their scaling behavior, the paper presents the ongoing work on Lag-Llama. Lag-Llama is a general-purpose univariate probabilistic time-series forecasting model (its lag-based representation is sketched below), trained on a large corpus of time-series datasets. On unseen "out-of-distribution" time-series datasets, the model...
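As an illustration of the lag-based representation behind Lag-Llama, a small sketch that builds lagged covariates for a univariate series; the specific lag set used here is an assumption for the example, not the paper's exact configuration:

```python
import numpy as np

def lag_features(series: np.ndarray, lags=(1, 7, 28)) -> np.ndarray:
    """Return an array of shape (T - max(lags), len(lags)): each row holds
    the series values at the chosen past lags for one target time step."""
    T, L = len(series), max(lags)
    return np.stack([series[L - lag : T - lag] for lag in lags], axis=1)

# Example: daily data with lags of 1 day, 1 week, and 4 weeks.
y = np.arange(100, dtype=float)
X = lag_features(y)   # X[t] pairs with target y[t + 28]
```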
Prediction task — Tabular Data
TABLET: Learning From Instructions For Tabular Data [code]
Language models are weak learners
LIFT: Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks [code]
TabLLM: Few-shot Classification of Tabular Data with Large Language Models ...
Date | Title | Authors | arXiv ID | Code
2024-09-05 | Interpretable mixture of experts for time series prediction under recurrent and non-recurrent conditions | Zemian Ke et al. | 2409.03282 | null
2024-09-05 | ChartMoE: Mixture of Expert Connector for Advanced Chart Understanding | Zhengzhuo Xu et al. | 2409.03277 | null
2024-09-05 | xLAM: A Family ...