# Qwen3-Neurotic-Experts-8x0.6b-v2

Qwen3-Neurotic-Experts-8x0.6b-v2 is a Mixture of Experts (MoE) model made with the following models using LazyMergekit:

* [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)
* [suayptalha/Qwen3-0.6B-Code-Expert](https://huggingface.co/suayptalha/Qwen3-0.6B-Code-Expert)
* [suayptalha/Qwen3-0.6B-Medical-Expert](https://huggingface.co/suayptalha/Qwen3-0.6B-Medical-Expert)
* [suayptalha/Qwen3-0.6B-Math-Expert](https://huggingface.co/suayptalha/Qwen3-0.6B-Math-Expert)
* [suayptalha/Qwen3-0.6B-Diagnose](https://huggingface.co/suayptalha/Qwen3-0.6B-Diagnose)
* [suayptalha/Qwen3-0.6B-Psychological-Support](https://huggingface.co/suayptalha/Qwen3-0.6B-Psychological-Support)
* [suayptalha/Qwen3-0.6B-IF-Expert](https://huggingface.co/suayptalha/Qwen3-0.6B-IF-Expert)
* [90dkn0ws/OpenR1-Distill-0.6B](https://huggingface.co/90dkn0ws/OpenR1-Distill-0.6B)

## 🧩 Configuration

```yaml
base_model: Qwen/Qwen3-0.6B
dtype: bfloat16
gate_mode: hidden
experts_per_token: 2

experts:
  - source_model: Qwen/Qwen3-0.6B
    # General Chat / multilingual expert
    positive_prompts:
      - "chat"
      - "conversation"
      - "dialogue"
      - "personal assistant"
      - "friendly discussion"
      - "social interaction"
      - "answer in the same language as the user"
    negative_prompts:
      - "code"
      - "mathematics"
      - "medical"
      - "diagnosis"
      - "psychology"
      - "follow instructions"

  - source_model: suayptalha/Qwen3-0.6B-Code-Expert
    positive_prompts:
      - "code"
      - "programming"
      - "python"
      - "javascript"
      - "c++"
      - "debug"
      - "write a function"
      - "implement algorithm"
    negative_prompts:
      - "chat"
      - "medical"
      - "psychology"
      - "diagnosis"
      - "math"

  - source_model: suayptalha/Qwen3-0.6B-Medical-Expert
    positive_prompts:
      - "medical"
      - "health"
      - "treatment"
      - "pharmacology"
      - "clinical advice"
      - "anatomy"
      - "physiology"
    negative_prompts:
      - "chat"
      - "code"
      - "mathematics"
      - "psychology"
      - "instructions"

  - source_model: suayptalha/Qwen3-0.6B-Math-Expert
    positive_prompts:
      - "math"
      - "mathematics"
      - "algebra"
      - "calculus"
      - "geometry"
      - "equation"
      - "proof"
      - "derivation"
    negative_prompts:
      - "chat"
      - "code"
      - "medical"
      - "psychology"
      - "diagnosis"

  - source_model: suayptalha/Qwen3-0.6B-Diagnose
    positive_prompts:
      - "diagnose"
      - "symptoms"
      - "differential diagnosis"
      - "case study"
      - "clinical reasoning"
    negative_prompts:
      - "chat"
      - "code"
      - "mathematics"
      - "psychology"

  - source_model: suayptalha/Qwen3-0.6B-Psychological-Support
    positive_prompts:
      - "mental health"
      - "emotional support"
      - "therapy"
      - "stress management"
      - "coping strategies"
      - "depression"
      - "anxiety"
    negative_prompts:
      - "chat"
      - "code"
      - "mathematics"
      - "medical"
      - "diagnosis"

  - source_model: suayptalha/Qwen3-0.6B-IF-Expert
    positive_prompts:
      - "follow instructions"
      - "step by step"
      - "task execution"
      - "procedures"
      - "guidelines"
      - "do the following"
    negative_prompts:
      - "chat"
      - "code"
      - "medical"
      - "psychology"
      - "mathematics"

  - source_model: 90dkn0ws/OpenR1-Distill-0.6B
    positive_prompts:
      - "reasoning"
      - "logic"
      - "think carefully"
      - "analyze"
      - "explain why"
      - "step by step reasoning"
    negative_prompts:
      - "chat"
      - "code"
      - "medical"
      - "psychology"
      - "instructions"
      - "mathematics"
```
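With `gate_mode: hidden`, the router scores each of the eight experts from the token's hidden state, and `experts_per_token: 2` means each token is routed to only its top two experts, whose outputs are combined with softmax-normalized weights. A minimal sketch of that top-2 gating step (illustrative only, using made-up router scores, not the actual merged model's internals):

```python
import math

def top2_gate(logits, k=2):
    """Route one token: keep the top-k expert scores, softmax over just those."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = [math.exp(logits[i]) for i in top]
    total = sum(exps)
    # The token's output is the weighted sum of the selected experts' outputs
    return {i: e / total for i, e in zip(top, exps)}

# Eight hypothetical router scores, one per expert
logits = [0.1, 2.0, -1.0, 0.5, 1.5, -0.3, 0.0, 0.2]
print(top2_gate(logits))  # experts 1 and 4 selected; weights sum to 1
```

Only two expert FFNs run per token, so inference cost stays close to a single 0.6B model even though all eight sets of weights are loaded.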

## 💻 Usage

```shell
!pip install -qU transformers bitsandbytes accelerate
```

```python
from transformers import AutoTokenizer, BitsAndBytesConfig
import transformers
import torch

model = "ItsVictorTube/Qwen3-Neurotic-Experts-8x0.6b-v2"

# Load the tokenizer and a 4-bit quantized text-generation pipeline
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={
        "torch_dtype": torch.float16,
        "quantization_config": BitsAndBytesConfig(load_in_4bit=True),
    },
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
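`apply_chat_template` renders the message list into the model's chat prompt format before generation. For Qwen models this is roughly the ChatML shape sketched below; the authoritative template ships with the tokenizer, so this standalone approximation is for illustration only:

```python
def chatml_prompt(messages, add_generation_prompt=True):
    """Illustrative approximation of a ChatML-style chat template."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    if add_generation_prompt:
        # Open the assistant turn so the model continues from here
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
print(chatml_prompt(messages))
```

The trailing open assistant turn is what `add_generation_prompt=True` adds, cueing the model to produce the reply rather than continue the user's text.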
Model size: 3B params (BF16, Safetensors)
