LFM2-8B-A1B-GGUF
LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment. It sets a new standard for quality, speed, and memory efficiency.
We're releasing the weights of our first Mixture-of-Experts (MoE) model based on LFM2, with 8.3B total parameters and 1.5B active parameters.
- LFM2-8B-A1B is the best on-device MoE in terms of both quality (comparable to 3-4B dense models) and speed (faster than Qwen3-1.7B).
- Code and knowledge capabilities are significantly improved compared to LFM2-2.6B.
- Quantized variants fit comfortably on high-end phones, tablets, and laptops.
Find more information about LFM2-8B-A1B in our blog post.
How to run LFM2
Example usage with llama.cpp:
llama-cli -hf LiquidAI/LFM2-8B-A1B-GGUF
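For a quick interactive check, llama-cli also accepts a prompt and basic sampling options, and the same repository can be served over HTTP with llama-server. A minimal sketch, assuming the llama.cpp binaries are on your PATH; -hf picks a default quantization, and a specific GGUF file from the repository can be selected with --hf-file (check the exact filename on the repository's Files tab):

# One-shot generation: the model is downloaded from Hugging Face on first use and cached locally
llama-cli -hf LiquidAI/LFM2-8B-A1B-GGUF -p "Explain mixture-of-experts in two sentences." -n 256 --temp 0.7

# Serve an OpenAI-compatible HTTP endpoint on localhost:8080 instead
llama-server -hf LiquidAI/LFM2-8B-A1B-GGUF --port 8080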