
GENERator-v2-Eukaryote Gene-Centric Pretraining Corpus

This repository provides the gene-centric pretraining corpus underlying GENERator-v2-Eukaryote, a large-scale DNA language model for eukaryotic genome understanding.

The dataset is constructed by leveraging RefSeq annotations to extract biologically meaningful functional genomic regions, which serve as the foundation for large-context DNA language model pretraining.


⚠️ Data access notice

The complete pretraining dataset for GENERator-v2 will be made fully publicly available upon acceptance of the corresponding manuscript.

Prior to publication, access to this dataset is provided on a collaborative basis only. Researchers with a legitimate scientific interest are welcome to request access by contacting:

qiuyi.li1993@gmail.com


📌 Dataset Construction Overview

The core design philosophy of this dataset is gene-centric functional sequence modeling.

High-confidence reference annotations (e.g. RefSeq) are used as a scaffold to identify and extract contiguous functional regions from eukaryotic genomes, including protein-coding genes and diverse RNA genes.


🧬 Data Schema

Each row in the dataset corresponds to one functional genomic segment.

| Column | Type | Description |
|---|---|---|
| `record_id` | string | RefSeq record identifier |
| `taxonomy` | string | Full taxonomic lineage (semicolon-separated) |
| `species_type` | string | High-level species category token |
| `gene_type` | string | Functional gene category token |
| `strand` | string | DNA strand in the reference genome (`<+>` or `<->`) |
| `sequence` | string | Extracted functional DNA sequence |
| `start` | int | Start coordinate of the functional region on the RefSeq record |
| `end` | int | End coordinate of the functional region on the RefSeq record |
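The schema above can be expressed as a plain type map for sanity-checking rows before use. A minimal sketch; the example row is hypothetical and illustrates the expected fields only, it is not taken from the dataset:

```python
# Documented column types for one functional genomic segment.
SCHEMA = {
    "record_id": str,
    "taxonomy": str,
    "species_type": str,
    "gene_type": str,
    "strand": str,
    "sequence": str,
    "start": int,
    "end": int,
}

def validate_record(row: dict) -> bool:
    """Return True if the row has exactly the documented columns and types."""
    return set(row) == set(SCHEMA) and all(
        isinstance(row[col], typ) for col, typ in SCHEMA.items()
    )

# Hypothetical example row (values invented for illustration).
example = {
    "record_id": "NC_000001.11",
    "taxonomy": "Eukaryota;Metazoa;Chordata;Mammalia",
    "species_type": "<mam>",
    "gene_type": "<cds>",
    "strand": "<+>",
    "sequence": "ATGGCGT",
    "start": 100,
    "end": 107,
}
```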

🌍 Species Type Tokens (species_type)

Each sample is annotated with a coarse-grained evolutionary category:

| Token | Meaning |
|---|---|
| `<prt>` | Protozoa |
| `<fng>` | Fungi |
| `<pln>` | Plant |
| `<inv>` | Invertebrate |
| `<vrt>` | Vertebrate (non-mammalian) |
| `<mam>` | Vertebrate (mammalian) |

🧠 Gene Type Tokens (gene_type)

Functional regions are categorized as follows:

| Token | Description |
|---|---|
| `<cds>` | Protein-coding gene (gene-centric region, not limited to CDS only) |
| `<pseudo>` | Pseudogene |
| `<tRNA>` | Transfer RNA gene |
| `<rRNA>` | Ribosomal RNA gene |
| `<ncRNA>` | Non-coding RNA |
| `<misc_RNA>` | RNA genes not assigned to a specific class |
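The two token vocabularies above can be kept as plain lookup dictionaries when decoding samples. A minimal sketch (the `describe` helper is illustrative, not part of the dataset tooling):

```python
# Lookup tables mirroring the documented token vocabularies.
SPECIES_TOKENS = {
    "<prt>": "Protozoa",
    "<fng>": "Fungi",
    "<pln>": "Plant",
    "<inv>": "Invertebrate",
    "<vrt>": "Vertebrate (non-mammalian)",
    "<mam>": "Vertebrate (mammalian)",
}

GENE_TOKENS = {
    "<cds>": "Protein-coding gene",
    "<pseudo>": "Pseudogene",
    "<tRNA>": "Transfer RNA gene",
    "<rRNA>": "Ribosomal RNA gene",
    "<ncRNA>": "Non-coding RNA",
    "<misc_RNA>": "RNA genes not assigned to a specific class",
}

def describe(species_type: str, gene_type: str) -> str:
    """Human-readable label for a (species_type, gene_type) token pair."""
    return f"{SPECIES_TOKENS[species_type]} / {GENE_TOKENS[gene_type]}"
```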

🔁 Strand Orientation

  • <+> denotes the positive strand
  • <-> denotes the negative strand in the reference genome

🔬 Sequence Characteristics

  • Raw DNA sequences (A/C/G/T/N)
  • Uppercase encoding
  • N denotes ambiguous nucleotides
  • No tokenization, masking, or augmentation is applied at this stage

This representation preserves maximum flexibility for downstream preprocessing and modeling strategies.
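The sequence properties listed above can be checked with a short helper before any downstream preprocessing. A minimal sketch (the helper names are illustrative):

```python
# Alphabet documented for raw sequences in this corpus.
VALID_BASES = set("ACGTN")

def check_sequence(seq: str) -> bool:
    """Verify a raw sequence is non-empty, uppercase A/C/G/T/N only."""
    return bool(seq) and set(seq) <= VALID_BASES

def ambiguous_fraction(seq: str) -> float:
    """Fraction of ambiguous (N) nucleotides in a sequence."""
    return seq.count("N") / len(seq)
```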


🚀 Intended Use

This dataset is designed to support:

  • Large-scale DNA language model pretraining
  • Gene-centric functional sequence modeling
  • Cross-species and cross-gene-type representation learning
  • Research in comparative and functional genomics

🧪 Relationship to GENERator-v2-Eukaryote Training

This repository provides raw functional sequence data.

The actual pretraining pipeline of GENERator-v2-Eukaryote applies additional post-processing steps, including:

  • Sequence concatenation and segmentation
  • Tokenization and phase augmentation

These steps are not applied in this dataset; they are described in detail in the GENERator-v2 Technical Report (coming soon).
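For illustration only, since these steps are applied downstream rather than in this dataset, concatenation and fixed-length segmentation might look like the following sketch. The window size and separator handling are assumptions, not the actual GENERator-v2 pipeline parameters:

```python
def concatenate_and_segment(sequences, window=8192, sep=""):
    """Concatenate raw sequences and split into fixed-length windows.

    Illustrative sketch only: window size and separator are assumed
    values, not the parameters used in the GENERator-v2 pipeline.
    """
    joined = sep.join(sequences)
    return [joined[i:i + window] for i in range(0, len(joined), window)]
```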


🔮 Future Data Releases

The training corpus for GENERator-v2-Prokaryote is currently under active evaluation and optimization.
We plan to release the corresponding prokaryotic pretraining data after thorough validation of data quality and downstream performance.

In addition, the GENERanno series of genome annotation datasets, covering both eukaryotic and prokaryotic genomes at substantially larger scale, will be made publicly available in future releases.

Please stay tuned for updates.


🔗 Related Resources

For more information about the GENERator family of models and ongoing developments, please visit our GitHub repository:

👉 https://github.com/GenerTeam/


📝 Citation

@article{li2026generator2,
    author = {Li, Qiuyi and Zhan, Zhihao and Feng, Shikun and Zhu, Yiheng and He, Yuan and Wu, Wei and Shi, Zhenghang and Wang, Shengjie and Hu, Zongyong and Yang, Zhao and Li, Jiaoyang and Tang, Jian and Liu, Haiguang and Qin, Tao},
    title = {Functional In-Context Learning in Genomic Language Models with Nucleotide-Level Supervision and Genome Compression},
    elocation-id = {2026.01.27.702015},
    year = {2026},
    doi = {10.64898/2026.01.27.702015},
    publisher = {Cold Spring Harbor Laboratory},
    URL = {https://www.biorxiv.org/content/early/2026/01/29/2026.01.27.702015},
    journal = {bioRxiv}
}

@article{wu2025generator,
  title={GENERator: a long-context generative genomic foundation model},
  author={Wu, Wei and Li, Qiuyi and Li, Mingyang and Fu, Kun and Feng, Fuli and Ye, Jieping and Xiong, Hui and Wang, Zheng},
  journal={arXiv preprint arXiv:2502.07272},
  year={2025}
}