Evaluating LiquidAI/LFM2.5-1.2B-Instruct-GGUF:Q8_0 for On-Device Scientific Reasoning
This report evaluates LiquidAI/LFM2.5-1.2B-Instruct-GGUF:Q8_0, a compact instruction-tuned language model developed by LiquidAI, with a focus on real-world scientific reasoning on edge hardware rather than synthetic benchmarks. The model was tested locally on an older Android smartphone (OnePlus 8), achieving sustained inference speeds of approximately 7–10 tokens per second while supporting a large context window (~120K tokens). Instead of relying on standardized leaderboards, the evaluation employed manually designed tests in linear algebra, differential equations, classical mechanics, and conceptual physics storytelling—each requiring method selection, internal consistency, and independent verification (e.g., cross-checking Newtonian mechanics with energy conservation).
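The cross-verification idea used in the tests, checking a force-based result against an energy-based one, can be sketched as follows. This is an illustrative example only (a block on a frictionless incline, with made-up inputs), not one of the actual evaluation prompts:

```python
import math

def incline_speed_newton(height, angle_deg, g=9.81):
    # Force-based analysis: acceleration along the incline is a = g*sin(theta),
    # distance along the incline is d = height / sin(theta), and v^2 = 2*a*d.
    theta = math.radians(angle_deg)
    a = g * math.sin(theta)
    d = height / math.sin(theta)
    return math.sqrt(2 * a * d)

def incline_speed_energy(height, g=9.81):
    # Energy-based analysis: m*g*h = (1/2)*m*v^2, so v = sqrt(2*g*h).
    return math.sqrt(2 * g * height)

v_newton = incline_speed_newton(height=5.0, angle_deg=30.0)
v_energy = incline_speed_energy(height=5.0)
# Internal consistency check: the two independent derivations must agree.
assert abs(v_newton - v_energy) < 1e-9
```

A model that reconciles both routes to the same answer, as described above, is effectively performing this kind of check in natural language.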
Across these tasks, the model consistently arrived at correct final answers, even when intermediate reasoning paths were non-linear, exploratory, or partially flawed. In mathematical problems, it demonstrated reliable convergence to valid solutions and correct verification against initial conditions or substitutions, despite occasional lapses in derivational rigor. In physics problems, it maintained numerical stability and cross-law consistency, correctly reconciling force-based and energy-based analyses. In narrative physics explanations, it conveyed largely accurate intuition with minor conceptual looseness typical of small models. Notably, during scientific problem solving, the model rendered mathematical expressions as clear, well-structured equations displayed directly on screen, rather than flattened or text-heavy LaTeX-style representations, significantly improving readability and practical usability.
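The "verification against initial conditions or substitutions" mentioned above can be made concrete with a small sketch: a candidate solution y(t) = y0·exp(-k·t) for the ODE y' = -k·y is checked by substituting it back numerically. The specific equation and constants are illustrative assumptions, not taken from the evaluation:

```python
import math

Y0, K = 2.0, 0.5  # assumed initial value and decay constant

def y(t):
    # Candidate closed-form solution of y' = -K*y with y(0) = Y0.
    return Y0 * math.exp(-K * t)

def dy_numeric(t, h=1e-6):
    # Central-difference approximation of y'(t).
    return (y(t + h) - y(t - h)) / (2 * h)

# Check the initial condition.
assert abs(y(0.0) - Y0) < 1e-12

# Substitute back into the ODE at several points: y'(t) should equal -K*y(t).
for t in (0.0, 0.5, 1.0, 2.0):
    assert abs(dy_numeric(t) + K * y(t)) < 1e-5
```

This mirrors what the model was asked to do in prose: solve, then independently confirm the solution satisfies both the equation and its initial data.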
Overall, the results suggest that LiquidAI/LFM2.5-1.2B-Instruct-GGUF:Q8_0 functions as a convergent, verification-capable reasoning model rather than a formal symbolic engine. Within its peer class—small, quantized, fully on-device models—the combination of reasoning robustness, large context capacity, high-quality mathematical rendering, and practical inference speed represents a notable achievement. This evaluation supports the view that, when paired with explicit verification, such models are already viable tools for applied scientific reasoning at the edge.
Further details of the evaluation, including the full prompts and model responses, are available here:
https://fate-stingray-0b3.notion.site/Prompt-and-model-response-2e03b975deec80e09dcfedbd343c4c9d