
Open-Source SLMs · February 21, 2025 · 5 min read

Open-Source SLMs with Advanced Reasoning & Super Speed: Meet Arcee-Maestro-7B-Preview and Arcee-Blitz

Today we bring you exciting updates on two small language models (SLMs) we've been working on: our first reasoning model, Arcee-Maestro-7B-Preview, and a fast and efficient Mistral-based DeepSeek distillation we call Arcee-Blitz.

Lucas Atkins

At Arcee AI, our core focus is and always has been to develop practical AI models that deliver strong performance without requiring excessive computational resources. Today, in that spirit, we’re sharing two updates:

• Arcee-Maestro-7B-Preview – Our first reasoning model trained with reinforcement learning.

• Arcee-Blitz – A Mistral-based 24B model that outperforms its parent on multiple benchmarks, with significantly improved world knowledge.


Arcee-Maestro-7B-Preview

Arcee-Maestro-7B-Preview is our latest 7B-parameter reasoning model. It is built on top of the Qwen-7B distillation from DeepSeek-R1—specifically, DeepSeek-R1-Distill-Qwen-7B—and then further trained through a GRPO run. While it doesn’t yet incorporate the new R1 distillations we’ve been preparing, it already shows promising improvements in mathematical and coding tasks.


Key Training Steps

  • Base Distillation: Started from a Qwen-7B model distillation of DeepSeek-R1.
  • Reinforcement Learning: Added a continued GRPO run over 450,000 verified math problems, plus bootstrapped coding examples (a simplified sketch of what a GRPO run looks like follows this list).
  
  • Performance Gains: Outperforms o1-Preview on math queries and nearly matches the 14B distill on most benchmarks. On Math-500 and other math-focused tasks, it approaches the performance of some 32B distillations.
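For readers curious what a GRPO run looks like in practice, here is a minimal, hypothetical sketch using Hugging Face TRL's GRPOTrainer on the same DeepSeek-R1-Distill-Qwen-7B base. The dataset id and the exact-match reward function are illustrative stand-ins only; they are not our actual training data, verifier, or hyperparameters.

```python
# Minimal GRPO sketch with Hugging Face TRL (illustrative; not Arcee's actual pipeline).
# Assumes a dataset with "prompt" and "answer" columns; the reward below is a
# simple exact-match check standing in for a real verified-math reward.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Hypothetical verified-math dataset (placeholder id).
dataset = load_dataset("your-org/verified-math-problems", split="train")

def exact_match_reward(completions, answer, **kwargs):
    # Reward 1.0 when the reference answer appears on the completion's final line, else 0.0.
    rewards = []
    for completion, ref in zip(completions, answer):
        final_line = completion.strip().splitlines()[-1] if completion.strip() else ""
        rewards.append(1.0 if ref.strip() in final_line else 0.0)
    return rewards

config = GRPOConfig(
    output_dir="maestro-grpo-sketch",
    num_generations=8,           # sampled completions per prompt for group-relative advantages
    max_completion_length=1024,  # leave room for chain-of-thought
    logging_steps=10,
)

trainer = GRPOTrainer(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
    reward_funcs=exact_match_reward,
    args=config,
    train_dataset=dataset,
)
trainer.train()
```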


Benchmarks

Arcee-Maestro-7B-Preview stacks up vs. leading reasoning models.

Looking Ahead

We’re continuing to refine Arcee-Maestro-7B-Preview, alongside its larger siblings, to strengthen our reinforcement learning pipelines. Stay tuned for additional releases and improved weights in the coming weeks, especially once our R1 distillations are fully integrated.


Arcee-Blitz

Arcee-Blitz is a new Mistral-based 24B model distilled from DeepSeek, designed to be both fast and efficient. We view it as a practical “workhorse” model that can tackle a range of tasks without the overhead of larger architectures.


Distillation Pipeline

  • Cross-Tokenizer Distillation: We merged the Virtuoso pipeline with a Mistral architecture, hot-starting training with an extra 3–5B tokens of pretraining distillation from DeepSeek-V3 logits (a simplified sketch of the logit-distillation objective follows this list).
  • Fine-Tuning and Post-Training: After capturing core logits, we performed additional fine-tuning and distillation steps to enhance overall performance.
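Cross-tokenizer distillation also requires aligning two different vocabularies, which is beyond a short snippet, but the underlying logit-distillation objective is standard knowledge distillation: minimize the KL divergence between temperature-softened teacher and student distributions. Here is a minimal PyTorch sketch of that loss, assuming the teacher logits have already been mapped onto the student vocabulary:

```python
import torch
import torch.nn.functional as F

def logit_distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student distributions.

    Both tensors are (batch, seq_len, vocab). In true cross-tokenizer distillation
    the teacher logits must first be aligned to the student vocabulary; that step
    is omitted here.
    """
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # batchmean KL, scaled by T^2 so gradient magnitude stays comparable across temperatures
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature**2

# Toy usage with random logits (vocab sizes assumed already aligned).
student = torch.randn(2, 16, 32000, requires_grad=True)
teacher = torch.randn(2, 16, 32000)
loss = logit_distillation_loss(student, teacher.detach())
loss.backward()
```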

Improved World Knowledge

  • Enhanced MMLU-Pro Results: Arcee-Blitz shows markedly better performance on MMLU-Pro than the original Mistral Small 3, reflecting dramatically improved world knowledge.
  • Triple-Checked Data: We’ve carefully examined our training data and pipelines to avoid contamination. While we’re confident in the validity of these gains, we remain open to further community validation and testing (one of the key reasons we release these models as open source).

Benchmark Highlights

Arcee-Blitz vs. Mistral Small 3 on key benchmarks.

How to Get Started with Arcee-Maestro-7B-Preview and Arcee-Blitz

Both Arcee-Maestro-7B-Preview and Arcee-Blitz are available now. If you need a robust 7B model for math and reasoning, or a 24B model optimized for speed and versatility, these releases may fit your workflow.
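As a starting point, here is a minimal sketch of running either model locally with the transformers library. The Hugging Face repo ids and sampling settings shown are assumptions; check the model cards for the exact ids and recommended generation parameters.

```python
# Minimal local inference sketch with Hugging Face transformers.
# Repo id and sampling settings are assumptions; see the model cards for specifics.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="arcee-ai/Arcee-Maestro-7B-Preview",  # or "arcee-ai/Arcee-Blitz"
    device_map="auto",
    torch_dtype="auto",
)

messages = [
    {"role": "user", "content": "Find all real x such that x^2 - 5x + 6 = 0. Show your reasoning."}
]
output = generator(messages, max_new_tokens=1024, do_sample=True, temperature=0.6)
print(output[0]["generated_text"][-1]["content"])
```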

We encourage you to:

  • Download the Models: They’ll be on Hugging Face and integrated into Arcee AI's inference platform soon.
  • Test & Provide Feedback: Let us know how these models perform on your tasks.
  • Stay Tuned: We’re continuing to refine our reinforcement learning pipelines, and we have more distillations, including R1-based ones, in progress.

If you have questions or want to learn more about how Arcee AI can support your organization’s AI strategy, email us at sales@arcee.ai or book a demo with our team.

We look forward to hearing about your experience with Arcee-Maestro-7B-Preview and Arcee-Blitz, and we’ll keep you updated as our next generation of models rolls out.
