What are your claims about being 'aligned to me' supposed to mean?
Everything about your model card has led me to believe you are intending to release uncensored models. That's what claiming your model doesn't refuse prompts that other models do would imply. And yet your model is every bit as bloated with refusal mechanisms as any other model. And inventing your own benchmark just so you can give yourself the best score? Come on. That's just grading your own homework.
It's known that base models have refusal directions baked in. Only continuous pretraining can effectively remove them, even though there is no guarantee that it's useful at all. Can these refusal directions affect post-trained models? Maybe. But it's not even guaranteed that the expression of a refusal direction in the latent space leads to a refusal output.
Prior to asking this, I had a script load the model and a set of 6000 toxic prompts from a DPO safety set and query the model for a response, using Minos to score the answers. My results were an extremely high percentage of refusals with a high confidence score. I'm aware that base pretrains have some degree of refusal logic gained from the pretrain data, but having instruct trained base models myself I find I get less refusals in the final model so long as I don't include any further moralizing in my dataset. It seems as though these models have had a large number of edge cases as false positives smoothed out but still retain much of their resistance to the intended toxic behavior that most mainstream models are trained against.
Well thats great - unfortunately, Minos has a very high false negative and false positive rate.
The model has improved significantly on refusals, especially in reasoning mode, and easily achieves less than 15% refusal rate on refusalbench with simple system prompts. Just try it out.
Future models will be even better, and currently, we are at least as good as Grok on refusals without a system prompt, just on different categories of tasks.

