WhitneyLand a day ago

Mostly SOTA performance at the 3B level. A notable addition to the small club of truly open models that provide full disclosure, code, and recipes to reproduce their work.

Looks like ballpark a million dollars of GPU time if you want to train up one for yourself (4000 gpus/24 days).

Very nice write up that’s generous in sharing their learnings.

This is a solid and positive contribution.

  • YetAnotherNick a day ago

    It's 384 H100s for 24 days, costing less than half a million dollars.

    • Imustaskforhelp a day ago

      Pardon me, but is the dataset public?

      Like if I really really just wanted to build it from scratch, could I do so? (not that I have that money but just curious)

      • hynky a day ago

        yes, both core web datasets are publicly available as well as the rest

        • Imustaskforhelp a day ago

          Thanks!

          To be honest, I might argue that this is one of the best truly open source models that we have got.

          There is AllenAI (OLMo?), and there is also the one that does distributed training, but this looks a lot like SOTA for 3B parameters to me.

          Thanks for telling me. Not going to lie, I am going to try to test it now! (I'll try a GGUF, for Ollama convenience.)

          • peatmoss 20 hours ago

            OLMo: https://allenai.org/olmo

            AFAIK, they were the first open everything model.

            • diggan 9 hours ago

              > AFAIK, they were the first open everything model.

              GPT-2 (released ~5 years ago?) was "open" in the sense that the weights were available for download (sans license), the exact datasets that were used were outlined, the architecture was explained, and so on. So I guess it was also "open" in the sense that Llama is "open", but neither would be "open source", which I'd feel pretty confident labelling OLMo.

              So OLMo seems to be the first actually "open source" model, but maybe not the first "open" as in "downloadable" one (which is what Facebook tries to call "open source").

    • segmondy 16 hours ago

      H100s are going for about $3/hr: 384 * 24 * 3 ≈ $28k

      • jrk 14 hours ago

        This is indeed a reasonable cost estimate for competitive short-term H100 rentals (source: much SemiAnalysis coverage, and my own exploration of the market), but there is a critical error (besides the formatting glitch with `*`):

        It was 24 days (576 hours) not 24 hours. $663,552 @ $3/hr.

        • mromanuk 5 hours ago

          According to Runpod's pricing page, you can run an H100 for $2.39/hr, which brings it down to as low as $528,629.76.

          WARNING: This is highly speculative and napkin math

          H200 (141 GB HBM3, $3.99/h, 1.4x perf): 216 x 24 x 17 = 88,128 GPU-hours ≈ $351,630.72 (216 cards for 17 days)

          B200 (192 GB HBM3e, $5.99/h, 2.8x perf): 158 x 24 x 9 = 34,128 GPU-hours ≈ $204,426.72 (158 cards for 9 days)

          Probably wrong math, should be more efficient and cheaper. Doubt that they have 100/200 cards available for that long.

          Source: I've only trained using RTX4090 and stuff like that with 8 cards.

          Not affiliated in any way with Runpod.

      • jazzyjackson 15 hours ago

        Take this brother, *, it may serve you well

      • dr_kretyn 15 hours ago

        The price just keeps on dropping with each comment. Anyone going to estimate it for less?

        What's the source for $3/h?

        • pests 13 hours ago

          They calculated for only 24 hours, not 24 days, so their number is off by a factor of 24.

  • refulgentis 21 hours ago

    I spent about 10 minutes this AM cross-checking against the Phi-4-mini benchmarks, as it was very odd not to include the leader in the benchmarks, and this model seemed universally behind it.

    For context, I dev an LLM client; a core tenet is keeping local as close to cloud parity as possible (via llama.cpp).

    Companies aren't taking local AI seriously on a sustained basis outside Microsoft.

    Overall, I would usually bite my tongue. HF is a great citizen, and I doubt this'll be a one-off. However, when I see superlatives affirmed while the long-standing local SoTA, a godsend in this sector, is left out, I think it is better to stand up and say this than to shy away.

    • adrianlzt 20 hours ago

      From the blog post: "SmolLM3 supports tool calling, and its chat template incorporates two distinct sections for tool descriptions: XML Tools and Python Tools"
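
      Roughly, going by the model card, that means passing your tool specs straight to the chat template. A minimal sketch (assuming the template's xml_tools argument works the way the card describes; double-check against the actual docs):

        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM3-3B")
        model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM3-3B")

        # One tool description in the usual JSON-schema style
        tools = [{
            "name": "get_weather",
            "description": "Get the current weather in a city",
            "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
        }]

        messages = [{"role": "user", "content": "What's the weather in Paris?"}]
        input_ids = tokenizer.apply_chat_template(
            messages,
            xml_tools=tools,            # assumption: fills the template's "XML Tools" section
            add_generation_prompt=True,
            tokenize=True,
            return_tensors="pt",
        )
        out = model.generate(input_ids, max_new_tokens=256)
        print(tokenizer.decode(out[0][input_ids.shape[1]:]))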

gardnr a day ago

It's small (3B) and does great on benchmarks. This is a model for edge / mobile deployments, so the gains over gemma3-4b are meaningful. It has dual-mode reasoning / non-reasoning, AND they released the full training method:

> We're releasing SmolLM3 with our engineering blueprint. It includes architecture details, exact data mixtures showing how we progressively boost performance across domains in a three-stage pretraining approach, and the methodology for building a hybrid reasoning model. Usually, achieving these results would require months of reverse engineering. Instead, we're providing the full methodology.

  • sigmoid10 4 hours ago

    I hate to say it, but reasoning models simply aren't suited for edge computing. I just ran some tests on this model, and even at 4-bit weight quantisation it blows past 10 GB of VRAM with just ~1000 tokens while it is still reasoning. So even if you're running on a dedicated ML edge device like a $250 Jetson, you will run out of memory before the model even formulates a real answer. You'll need a high-end GPU to make full use of it for limited answers and an enterprise-grade system to support longer contexts. And with reasoning turned off I don't see any meaningful improvement over older models.

    So this is primarily great for enterprises who want to do on-prem with limited budgets and maybe high-end enthusiasts.

    • wizee 3 hours ago

      You should use flash attention with KV cache quantization. I routinely use Qwen 3 14B with the full 128k context and it fits in under 24 GB VRAM. On my Pixel 8, I've successfully used Qwen 3 4B with 8K context (again with flash attention and KV cache quantization).
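
      In llama.cpp terms that's just the flash-attention flag plus quantized K/V cache types. A rough sketch with the llama-cpp-python bindings (the model path and the q8_0 cache types are illustrative choices, not a recommendation):

        from llama_cpp import Llama

        llm = Llama(
            model_path="Qwen3-14B-Q4_K_M.gguf",  # any local GGUF
            n_ctx=131072,        # long contexts are where the KV cache dominates VRAM
            n_gpu_layers=-1,
            flash_attn=True,     # llama.cpp needs this for a quantized V cache
            type_k=8,            # 8 == GGML_TYPE_Q8_0, i.e. 8-bit K cache
            type_v=8,            # 8 == GGML_TYPE_Q8_0, i.e. 8-bit V cache
        )
        print(llm("Say hi.", max_tokens=16)["choices"][0]["text"])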

danielhanchen 19 hours ago

I fixed some chat template issues for llama.cpp and other inference engines! To run it, do:

./llama.cpp/llama-cli -hf unsloth/SmolLM3-3B-GGUF:Q4_K_XL --jinja -ngl 99

  • diggan 8 hours ago

    > fixed some chat template issues

    This seems to be a persistent issue with almost all weight releases, even from bigger companies like Meta.

    Are the people who release these weights not testing them in various inference engines? It seems they make it work with Hugging Face's Transformers library and then call it a day, but sometimes not even that.

    • clarionbell 6 hours ago

      No, they don't. Why would they? Most of them are using a single inference engine, most likely developed in-house. Or they go for something like vLLM; llama.cpp especially is under their radar.

      The reason is simple: there isn't much money in it. llama.cpp is free and targets the lower end of the hardware spectrum. Corporations will run something else or, even more likely, offload the task to a contractor.

simonw 17 hours ago

I'm having trouble running this on my Mac - I've tried Ollama and llama.cpp llama-server so far, both using GGUFs from Hugging Face, but neither worked.

(llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'smollm3')

I've managed to run it using Python and transformers with PyTorch in device="cpu" mode but unsurprisingly that's really slow - it took 35s to respond to "say hi"!

Anyone had success with this on a Mac yet? I really want to get this running with tool calling, ideally via an OpenAI-compatible serving layer like llama-server.

  • reach-vb 8 hours ago

    Hey Simon, VB from Hugging Face here and the person who added the model to MLX and llama.cpp (with Son). The PR hasn't yet landed on llama.cpp, hence it doesn't work OTB on llama.cpp installed via brew (similarly doesn't work with ollama since they need to bump their llama.cpp runtime)

    The easiest would be to install llama.cpp from source: https://github.com/ggml-org/llama.cpp

    If you want to avoid it, I added SmolLM3 to MLX-LM as well:

    You can run it via `mlx_lm.chat --model "mlx-community/SmolLM3-3B-bf16"`

    (requires the latest mlx-lm to be installed)

    here's the MLX-lm PR if you're interested: https://github.com/ml-explore/mlx-lm/pull/272

    similarly, llama.cpp here: https://github.com/ggml-org/llama.cpp/pull/14581

    Let me know if you face any issues!

    • kosolam 6 hours ago

      Could you please enlighten me regarding all these engines? I'm using llama.cpp and Ollama. Should I also try MLX, ONNX, vLLM, etc.? I'm not quite sure what the difference between all of these is. I'm running on CPU and sometimes GPU.

      • pzo 3 hours ago

        Ollama is a wrapper around llama.cpp; they use ggml's GGUF format. ONNX is a different ML model format, and ONNX Runtime is developed by Microsoft. MLX is an ML framework from Apple. If you want the fastest speed on macOS, most likely stick with MLX.

  • tripplyons 16 hours ago

    Have you tried setting device="mps" to use Metal? It should be faster than PyTorch's "cpu" device on Mac.
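
    Untested sketch, but it should be a drop-in device string with the pipeline API:

      import torch
      from transformers import pipeline

      # Fall back to CPU if Metal isn't available
      device = "mps" if torch.backends.mps.is_available() else "cpu"
      pipe = pipeline(
          "text-generation",
          model="HuggingFaceTB/SmolLM3-3B",
          device=device,
          torch_dtype=torch.float16,  # fp16 keeps the 3B model comfortably in memory
      )
      print(pipe("say hi", max_new_tokens=32)[0]["generated_text"])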

_1 a day ago

Which small model is good for fine-tuning on various enterprise data sets? Our business units want to run small models in the browser and on mobile devices, without dealing with RAG and cloud resources.

  • gardnr a day ago

    Small models are bad at knowing things. Trying to train knowledge into small models is probably not the way you want to go. You could try building an offline embedded RAG system that is deployable as wasm. Some folks have been having success with this.

    • _1 a day ago

      We do use WebLLM and a hosted Weaviate database, but there are complaints about speed (both retrieval and time to first token, as the context gets big). The Gemma 3n "nesting doll" approach sounds like it could be useful... but I haven't found anyone specifically doing it to add domain-specific knowledge.

      • janalsncm a day ago

        Typically, retrieval is the fast part in my experience. Have you considered cheaper retrieval methods? BM25 does pretty well on its own, and you can augment your dataset by precomputing relevant queries for each doc.
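
        Something like rank_bm25 is enough to try it out; a toy sketch (the documents and query are made up):

          from rank_bm25 import BM25Okapi

          docs = [
              "Reset your password from the account settings page.",
              "Invoices are generated on the first business day of the month.",
              "Contact support to change the billing address on an invoice.",
          ]
          bm25 = BM25Okapi([d.lower().split() for d in docs])

          query = "how do I change my billing address".lower().split()
          scores = bm25.get_scores(query)
          # Feed the top-scoring docs into the model's context instead of fine-tuning them in
          print(docs[int(scores.argmax())])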

  • thatjoeoverthr 6 hours ago

    Tuning is really not the way to add information.

    Bite the bullet and do some kind of RAG; you need to provide clear, authoritative information to a model that is skilled enough to remix it for the user.

    Tuning the model to imitate the dataset will damage the model's skills and "common sense", but won't train it to reliably recall information.

  • mhitza a day ago

    You really need to try them all out yourself and make sure you have proper benchmarks.

    While machine learning is not my field, I've tried to finetune Mistral 7B (following their official guide and toolset) and the results did not satisfy. I had a few very specific questions from the dataset that, no matter how much I finetuned and tweaked the process, it was not able to answer with correct information.

    A mix of vector search + keyword search is still better at building the right question context than expecting it to learn all the information.

    I've used the pretrained dataset approach. Maybe building synthetic questions and answers around the dataset yields better results, but I didn't have time to experiment with that approach.

    • magicalhippo 17 hours ago

      > Maybe building synthetic questions and answers around the dataset yields better results, but I didn't have time to experiment with that approach.

      While the Physics of Language Models[1] answers a slightly different question, based on their results it seems to me that one likely needs to do such augmentation of the dataset to get good results.

      However, they also show that the dataset the base model is trained on can drastically affect finetuning performance. So if the base model is trained on a poor dataset for your specific task, perhaps you'll never get good performance.

      [1]: https://physics.allen-zhu.com/part-3-knowledge/part-3-1

    • ivape 21 hours ago

      How much data did you use to fine tune?

      • mhitza 21 hours ago

        Kilobytes to megabytes of data. I was trying to fine-tune it on some specific legislation that I expected to be able to ask about afterwards.

  • simonw a day ago

    What are you hoping to achieve by fine-tuning a model in this way?

  • netdur a day ago

    I have fine-tuned Gemma 3N 2B and it's pretty good, but it loads slowly on my S23U; once it's loaded, though, it works fine.

    Also tried SmolVLM 256M and 500M, they load faster and you can embed them in assets, they work if you know what you're doing

    Just keep in mind that smaller models don't perform as well due to their limited parameters

    Also, on Android, since you can't ship files larger than 2 GB due to Java compression issues, you need to download models separately. You then can't load the model from the download folder; you have to copy it into the app's own folder. This means a Gemma 3N 2B model that's 3.14 GB would need at least 7 GB of free space on the user's phone.

gdiamos a day ago

Nice work anton et al.

I hope you continue the 50-100M parameter models.

I think there is a case for models that finish fast on CPUs for solve-by-LLM test cases.

BarakWidawsky a day ago

It’s interesting that it looks like they didn’t apply their own RL to the model, and instead fine tuned on reasoning traces from large datasets and generating reasoning traces from larger models

  • lewtun a day ago

    Indeed we opted for offline methods like Anchored Preference Optimization as we found in the Open R1 project that doing multi-task RL on small models is quite a hassle to get right. With offline methods, you focus much more on dataset curation / generation, but that still provides faster iteration cycles for the model scale we’re dealing with!
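
    For anyone who wants to poke at this themselves: recent TRL versions expose APO as a loss variant of the DPO trainer. A rough sketch only; the model and dataset names here are illustrative, not the actual SmolLM3 recipe:

      from datasets import load_dataset
      from transformers import AutoModelForCausalLM, AutoTokenizer
      from trl import DPOConfig, DPOTrainer

      model_name = "HuggingFaceTB/SmolLM3-3B-Base"  # illustrative starting point
      model = AutoModelForCausalLM.from_pretrained(model_name)
      tokenizer = AutoTokenizer.from_pretrained(model_name)

      # Any preference dataset with prompt/chosen/rejected columns works
      train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

      args = DPOConfig(output_dir="smollm3-apo", loss_type="apo_zero", beta=0.1)
      trainer = DPOTrainer(
          model=model,
          args=args,
          train_dataset=train_dataset,
          processing_class=tokenizer,
      )
      trainer.train()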

msgodel a day ago

Wow. Close to a Qwen3 distill with 75% the size. That's great!

I've been using the smollm base models for my own finetunes just because they're so high quality, it looks like I might be using them to drive local agents/code completion in the near future too.

Their RL algorithm looks interesting. I'm still using OpenAI's algorithm for my stuff, I've been meaning to check on the SoTA since I know my code is pretty outdated (It's crazy how fast that happens with this stuff.)

eachro a day ago

From what I've heard, the llama3 models are fairly easy to fine-tune (please correct me if I'm wrong or if there are more amenable models here). How easy is it to finetune smollm3? I know a lot of the MoE LLMs have been quite fickle in this regard.

grrowl 16 hours ago

Great to see Hugging Face stick to their guns with CodeEval and Python tooling. Agentic turn-by-turn tool calling is fine and all, but we're underutilising models' ability to write and execute code in an "agent-like" environment.

tiahura a day ago

Can anyone estimate how much of the 3B is necessitated by multi-language support?

  • ethan_smith 5 hours ago

    Typically, multilingual capabilities consume 20-30% of model parameters in small LLMs, primarily in token embeddings and early transformer layers. Monolingual variants of similar models often perform better on English benchmarks with the same parameter count.

  • rockinghigh a day ago

    The vocabulary size is fairly small (128,256) for a multilingual model. I would guess it doesn't require many additional parameters to support these 5 languages as many tokens can be shared.
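
    Back-of-envelope, assuming a hidden size of 2048 and tied input/output embeddings (which I believe is what SmolLM3 uses; treat both as assumptions):

      # Rough share of a ~3B-parameter budget spent on the (tied) token embeddings
      vocab_size, hidden_size, total_params = 128_256, 2048, 3e9
      embedding_params = vocab_size * hidden_size  # ~263M
      print(f"{embedding_params/1e6:.0f}M embedding params, "
            f"{embedding_params/total_params:.1%} of the model")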

  • netdur 4 hours ago

    Naive look: 2/3 of the model, so without multi-language support this should be around 1B.

ivape 21 hours ago

I wonder if this will be cheaper than llama 3.1 8b on OpenRouter.

cess11 11 hours ago

I've tried to use gemma3:4b which comes up better in that benchmark and found it to be quite disappointing. It breaks a lot, sucks even worse than qwen2.5-coder:7b and incept5/llama3.1-claude:7b at code, needs to be tricked or threatened into saying stuff about many everyday topics. It also commonly chugs away for minutes exercising the GPU fans before responding, at which point I'm already ahead because I figured out another way to solve my problem or get at some information.

My experience with phi4-mini and granite3.3 was about the same, and they annoy me even more when I hook them into code editors and try to get them to contribute to my work. For one because they're slow, and at best they suggest adding unnecessary error handling in the style of null checks everywhere, at worst they just start mixing or hallucinating programming languages. Where they would be useful as leverage if they worked, i.e. close to the edge of where I can debug and refactor without getting stuck, they just go into straight nonsense mode, especially on terse first-pass code.

Sometimes I've tried to query these things for descriptions of recent history in foreign countries, Wikipedia trivia basically, and they're very often wrong in subtle ways. For example, a politician might have been at it for half a century or so in a troubled country and because they've been ousted in a coup once in the eighties the model is absolutely sure they can't have been in office since.

If a person acted like these things do I'd wish for them to get immediate institutional care. Maybe the problem is somehow with me, but I have a deep suspicion it's not.

bitwize a day ago

There's a British comedy skit lurking in here.

"So it's a small large language model?"

"Oh yes, very small."

"How can it be small and large at the same time?"

"Well, it's small by the standards of a large language model."

"So it's large."

"Oh yes, very large."

"Large compared to what?"

"Small language models."

"And so something like ChatGPT, what would that be exactly? A large large language model?"

"Yes, precisely. An LLLM."

  • janalsncm a day ago

    Standards have shifted as well. GPT-2 used to be considered "large", but it is half the size of this. Oh, and also Sam Altman said it was too dangerous to release. At this point I consider anything too big to run on consumer-grade hardware to be large, but an exact definition is a little silly to argue about.

  • _kb 18 hours ago

    Australian. This is straight up Clarke and Dawe / Utopia.

    • viraptor 16 hours ago

      "Yes, a British Australian comedy sketch."

      "So it's British?"

      "By heritage."

      "But Australian?"

      "By production."

      "Ah, so it’s satire."

      "It was, until someone funded it."

    • bitwize 17 hours ago

      I must confess, I was inspired by "the front fell off".

  • papichulo2023 21 hours ago

    Do not mess with the Miniature giant space hamsters

  • netdur a day ago

    it's big little planet or small big planet?