Chat-TS: Enhancing Multi-Modal Reasoning Over Time-Series and Natural Language Data
Paper: arXiv:2503.10883
Chat-TS model trained on a LLama3.1-7B backbone.
This model discretely tokenizes time-series data and uses an expanded vocabulary to model time-series representations. Because of these modifications, it should be compatible with most modern inference frameworks: you can simply pass the multi-modal token stream directly to the model (e.g., vLLM).
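To illustrate the discrete-tokenization idea, here is a minimal sketch of mapping a numeric series into token IDs from an expanded vocabulary region. The bin count, normalization scheme, and vocabulary offset below are illustrative assumptions, not the exact scheme from the paper; see the paper for the actual tokenizer details.

```python
import numpy as np

def tokenize_series(values, num_bins=1024, vocab_offset=128256):
    """Quantize a 1-D time series into discrete token IDs.

    Assumptions (illustrative only):
    - min-max normalization to [0, 1]
    - `num_bins` uniform quantization bins
    - `vocab_offset` places the new tokens after the base LLM vocabulary
    """
    values = np.asarray(values, dtype=np.float64)
    lo, hi = values.min(), values.max()
    scaled = (values - lo) / (hi - lo + 1e-8)        # normalize to [0, 1]
    bins = np.clip((scaled * num_bins).astype(int),  # uniform quantization
                   0, num_bins - 1)
    return (bins + vocab_offset).tolist()            # shift into expanded vocab

tokens = tokenize_series([0.0, 0.5, 1.0])
```

Token IDs produced this way can be interleaved with ordinary text token IDs and passed to the model as a single multi-modal token stream.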
This model was trained for text-generation tasks; however, the framework is extensible to time-series generation as well.
For more information please see the paper below.
If you use this work please cite:
@misc{quinlan2025chattsenhancingmultimodalreasoning,
      title={Chat-TS: Enhancing Multi-Modal Reasoning Over Time-Series and Natural Language Data},
      author={Paul Quinlan and Qingguo Li and Xiaodan Zhu},
      year={2025},
      eprint={2503.10883},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2503.10883},
}