uqer1244/MLX-z-image

https://github.com/uqer1244/MLX_z-image

This is a 4-bit quantized MLX version of the original Z-Image model. It is optimized for Apple Silicon (macOS) using the MLX framework.

Model Details

  • Transformer: MLX 4-bit quantized
  • Text Encoder: MLX 4-bit quantized (Qwen3)
  • VAE: Original PyTorch Model (Sourced from original repo)
  • Tokenizer: Original Qwen2 Tokenizer (Sourced from original repo)
  • Scheduler: MLXFlowMatchEulerScheduler
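The scheduler listed above performs Euler integration of a flow-matching ODE: at each step, the model predicts a velocity and the sample moves along it for the interval between the current and next noise levels. A minimal, framework-free sketch of that stepping logic (illustrative only; the actual MLXFlowMatchEulerScheduler implementation in the repository may differ in details such as sigma schedules and shifts):

```python
def euler_flow_match_step(x, velocity, sigma, sigma_next):
    """One Euler step: move x along the predicted velocity for the
    interval (sigma_next - sigma)."""
    return [xi + vi * (sigma_next - sigma) for xi, vi in zip(x, velocity)]


def sample(x, velocity_fn, sigmas):
    """Integrate from sigmas[0] down to sigmas[-1] with Euler steps.

    velocity_fn(x, sigma) stands in for the transformer's velocity
    prediction (a hypothetical placeholder, not the real model call).
    """
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        v = velocity_fn(x, sigma)
        x = euler_flow_match_step(x, v, sigma, sigma_next)
    return x
```

With a constant velocity field of 1.0 and sigmas going from 1.0 to 0.0, the sample moves by exactly -1.0 in total, which is a quick sanity check that the step sizes sum to the full interval.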

Usage

This model is intended to be used with the custom MLX pipeline script. Please refer to the GitHub repository linked above for the pipeline script and detailed usage instructions, and to the original repository for details on the model architecture.

Attribution & License

This model is a derivative work of the original Z-Image model.

  • Original License: Apache 2.0
  • Modifications: Converted Transformer and Text Encoder weights to MLX format and quantized to 4-bit.
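To make "quantized to 4-bit" concrete: 4-bit quantization stores each weight as a small integer (here, in [-7, 7]) plus a shared per-group scale, cutting memory to roughly a quarter of float16. The following is a generic sketch of symmetric group quantization in pure Python; the real MLX quantizer uses grouped affine quantization with packed storage and differs in details:

```python
def quantize_4bit(weights, group_size=4):
    """Quantize floats to 4-bit-range integers per group, with one
    float scale per group (symmetric, illustrative only)."""
    quantized, scales = [], []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        # Map the largest magnitude in the group to +/-7.
        scale = max(abs(w) for w in group) / 7 or 1.0
        scales.append(scale)
        quantized.append([round(w / scale) for w in group])
    return quantized, scales


def dequantize_4bit(quantized, scales):
    """Reconstruct approximate floats from integers and group scales."""
    out = []
    for group, scale in zip(quantized, scales):
        out.extend(q * scale for q in group)
    return out
```

The reconstruction error per weight is bounded by half the group's scale, which is why weights with similar magnitudes are grouped together.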
