
Chat-TS model trained on a Llama 3.1-7B backbone.

This model discretely tokenizes time-series data and uses an expanded vocabulary to model time-series representations. Because the time-series tokens are ordinary vocabulary entries, the model should be compatible with most modern inference frameworks: you can simply pass the multi-modal token stream directly to the model (e.g., vLLM).
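
As a rough illustration of the idea (not the released tokenizer), the sketch below min-max scales a series, bins each point, and maps the bins to special tokens that are interleaved with text. The token names (`<ts_0>`, `<ts_start>`, `<ts_end>`) and the bin count are illustrative assumptions, not the model's actual vocabulary.

```python
# Minimal sketch of discrete time-series tokenization; all token names and
# the number of bins are assumptions for illustration only.
import numpy as np

NUM_BINS = 1024  # assumed size of the added time-series vocabulary

def discretize_series(values: np.ndarray, num_bins: int = NUM_BINS) -> list[str]:
    """Min-max scale a 1-D series to [0, 1] and map each point to a bin token."""
    lo, hi = values.min(), values.max()
    scaled = (values - lo) / (hi - lo) if hi > lo else np.zeros_like(values)
    bins = np.clip((scaled * (num_bins - 1)).round().astype(int), 0, num_bins - 1)
    return [f"<ts_{b}>" for b in bins]

# Build a multi-modal prompt: plain text with the discrete time-series tokens inline.
series = np.sin(np.linspace(0, 4 * np.pi, 32)) + 0.1 * np.random.randn(32)
ts_tokens = discretize_series(series)
prompt = (
    "Here is a time series: <ts_start>"
    + "".join(ts_tokens)
    + "<ts_end> Describe its overall trend."
)
print(prompt)
```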

This model was trained for text-generation tasks; however, the framework is extensible to time-series generation as well.
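
For example, a hedged usage sketch with vLLM is shown below: since the time-series tokens live in the expanded vocabulary, the mixed text/time-series prompt can be passed to a standard inference engine as-is. The sampling settings are placeholders, and whether the checkpoint loads directly without extra configuration is an assumption.

```python
# Hedged sketch: generating text from a multi-modal prompt with vLLM.
from vllm import LLM, SamplingParams

# The prompt interleaves text with discrete time-series tokens, as in the sketch above.
prompt = (
    "Here is a time series: <ts_start><ts_12><ts_37><ts_64><ts_88><ts_end> "
    "Describe its overall trend."
)

llm = LLM(model="PaulQ1/Chat_TS")  # loading details for this checkpoint are assumptions
params = SamplingParams(temperature=0.7, max_tokens=256)  # placeholder sampling settings
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```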

For more information, please see the paper below.

If you use this work, please cite:

@misc{quinlan2025chattsenhancingmultimodalreasoning,
      title={Chat-TS: Enhancing Multi-Modal Reasoning Over Time-Series and Natural Language Data}, 
      author={Paul Quinlan and Qingguo Li and Xiaodan Zhu},
      year={2025},
      eprint={2503.10883},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2503.10883}, 
}