Model Description

This model is a fine-tuned version of Qwen/Qwen3-8B using the Unsloth library and LoRA for parameter-efficient training.
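The card does not report the exact LoRA configuration. The sketch below shows what a typical Unsloth + LoRA setup looks like; max_seq_length, load_in_4bit, the LoRA rank, alpha, and the target modules are all assumptions for illustration, not the actual training values.

# Sketch of an Unsloth + LoRA setup (hyperparameters are assumptions,
# not taken from this card)
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-8B",
    max_seq_length=2048,   # assumed
    load_in_4bit=True,     # assumed
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,                  # assumed LoRA rank
    lora_alpha=16,         # assumed
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumed
)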

The model was trained on the following dataset:

- OpenMed/Medical-Reasoning-SFT-GPT-OSS-120B: for enhancing medical reasoning skills

Model Details

  • Developed by: Claudio Giorgio Giancaterino
  • Language(s) (NLP): English
  • License: Apache 2.0

Uses

Direct Use

This model can be used to support healthcare applications, medical research, and clinical text generation.

Downstream Use

It can be integrated into educational chatbots for medical reasoning conversations.

Out-of-Scope Use

It is not suitable for high-stakes clinical decision-making and must not replace the judgment of qualified healthcare professionals.

Bias, Risks, and Limitations

Conversational quality may degrade with complex or multi-turn inputs.
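To probe this limitation yourself, a minimal sketch of how multi-turn history is supplied via the transformers pipeline; the assistant turn shown is illustrative only, not actual model output.

from transformers import pipeline

pipe = pipeline("text-generation", model="towardsinnovationlab/Qwen3-8B-medical")

# Multi-turn history: prior assistant turns are passed back as context.
messages = [
    {"role": "user", "content": "What are the main symptoms of heart disease?"},
    {"role": "assistant", "content": "Common symptoms include chest pain, shortness of breath, and fatigue."},
    {"role": "user", "content": "Which of those warrant an urgent visit?"},
]
result = pipe(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])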

How to Get Started with the Model

Use the code below to get started with the model.

- Using the pipeline:

# Use a pipeline as a high-level helper
from transformers import pipeline
import re

pipe = pipeline("text-generation", model="towardsinnovationlab/Qwen3-8B-medical")

messages = [
    {"role": "user", "content": "What are the main symptoms of heart disease? "
                                "Please provide your answer in bullet points."},
]
result = pipe(messages)
# Extract only the assistant's response
assistant_response = result[0]['generated_text'][-1]['content']
# Remove the <think> tags and their content
clean_response = re.sub(r'<think>.*?</think>', '',
                        assistant_response, flags=re.DOTALL).strip()
print(clean_response)

- Loading the model:

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
import re

tokenizer = AutoTokenizer.from_pretrained("towardsinnovationlab/Qwen3-8B-medical")
model = AutoModelForCausalLM.from_pretrained("towardsinnovationlab/Qwen3-8B-medical")
messages = [
    {"role": "user", "content": "What are the main symptoms of heart disease? "
                                "Please provide your answer in bullet points."},
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    **inputs, 
    max_new_tokens=512, 
    temperature=0.7, 
    top_p=0.8, 
    top_k=20,
    do_sample=True  
)

# Extract assistant's response
assistant_response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], 
    skip_special_tokens=True
)

# Remove <think> tags and their content
clean_response = re.sub(r'<think>.*?</think>', '', assistant_response,
                        flags=re.DOTALL).strip()

print(clean_response)
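Rather than stripping <think> blocks with a regex, Qwen3's chat template exposes an enable_thinking flag that suppresses the thinking block at the prompt level. Assuming this fine-tune inherits the base model's chat template, the template call above can be adjusted as follows.

# Alternative: suppress the <think> block via the chat template
# (assumes the fine-tune keeps Qwen3's template)
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    enable_thinking=False,  # Qwen3 template kwarg
).to(model.device)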

Training Details

Training Data

- OpenMed/Medical-Reasoning-SFT-GPT-OSS-120B, with 200,193 synthetic medical conversations.
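To inspect the data, it can be loaded with the datasets library; a minimal sketch, assuming the default "train" split (the split name and column layout are not stated in this card).

# Load and inspect the training dataset ("train" split name is assumed)
from datasets import load_dataset

dataset = load_dataset("OpenMed/Medical-Reasoning-SFT-GPT-OSS-120B", split="train")
print(dataset)     # column names and row count
print(dataset[0])  # first conversation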

Training Procedure

- Hardware: Colab with an NVIDIA A100 GPU
- per_device_train_batch_size = 4
- gradient_accumulation_steps = 8
- warmup_steps = 5
- max_steps = 30
- learning_rate = 2e-5
- logging_steps = 100
- save_steps = 500
- optim = "adamw_torch"
- weight_decay = 0.001
- lr_scheduler_type = "linear"
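These settings map directly onto a TRL SFTConfig. A minimal sketch of the run under that assumption: the card does not state which trainer was used, and output_dir, model, and dataset are placeholders carried over from the sketches above.

# Sketch of the training run with TRL's SFTTrainer (trainer choice is an
# assumption; the hyperparameters are the ones listed above)
from trl import SFTTrainer, SFTConfig

training_args = SFTConfig(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    warmup_steps=5,
    max_steps=30,
    learning_rate=2e-5,
    logging_steps=100,
    save_steps=500,
    optim="adamw_torch",
    weight_decay=0.001,
    lr_scheduler_type="linear",
    output_dir="outputs",   # placeholder
)

trainer = SFTTrainer(
    model=model,            # the LoRA-wrapped model from the sketch above
    train_dataset=dataset,  # the OpenMed dataset, formatted for chat SFT
    args=training_args,
)
trainer.train()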

Results

Final training loss: 1.1232

Citation

@misc{towardsinnovationlab_2025,
    author    = {Claudio Giorgio Giancaterino},
    title     = {Qwen3-8B-medical},
    year      = {2025},
    url       = {https://huggingface.co/towardsinnovationlab/Qwen3-8B-medical},
    publisher = {Hugging Face}
}

Framework versions

  • PEFT 0.18.0
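
Since the model tree below lists this repository as an adapter, it can also be loaded directly with PEFT; a minimal sketch, assuming the repo hosts LoRA adapter weights whose config points at the base model.

# Load base model + LoRA adapter in one step via PEFT
# (assumes adapter-style hosting of this repository)
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("towardsinnovationlab/Qwen3-8B-medical")
tokenizer = AutoTokenizer.from_pretrained("towardsinnovationlab/Qwen3-8B-medical")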

Model tree for towardsinnovationlab/Qwen3-8B-medical

  • Base model: Qwen/Qwen3-8B-Base
  • Finetuned: Qwen/Qwen3-8B
  • Finetuned: unsloth/Qwen3-8B
  • Adapter: this model

Dataset used to train towardsinnovationlab/Qwen3-8B-medical

  • OpenMed/Medical-Reasoning-SFT-GPT-OSS-120B