That model was trained in part using their unreleased R1 "reasoning" model. Today they've released R1 itself, along with a whole family of new models derived from that base.
There's a lot of stuff in the new release.
DeepSeek-R1-Zero appears to be the base model. It's over 650GB in size and, like most of their other releases, is under a clean MIT license. DeepSeek warn that "DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing." ... so they also released:
DeepSeek-R1, which "incorporates cold-start data before RL" and "achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks". That one is also MIT licensed, and is a similar size.
I don't have the ability to run models larger than about 50GB (I have an M2 with 64GB of RAM), so neither of these two models is something I can easily play with myself. That's where the new distilled models come in.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen.
This is a fascinating flex! They have models based on Qwen 2.5 (14B, 32B, Math 1.5B and Math 7B) and Llama 3 (Llama-3.1 8B and Llama 3.3 70B Instruct).
Weirdly those Llama models have an MIT license attached, which I'm not sure is compatible with the underlying Llama license. The Qwen models are Apache licensed, so maybe MIT is OK?
(I also just noticed the MIT license files say "Copyright (c) 2023 DeepSeek", so they may need to pay a bit more attention to how they copied those in.)
Licensing aside, these distilled models are interesting beasts.
Running DeepSeek-R1-Distill-Llama-8B-GGUF
Quantized versions are already starting to show up. So far I've tried just one of those, unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF released by Unsloth AI, and it's really fun to play with.
I'm running it using the combination of Ollama, LLM and the llm-ollama plugin.
First I fetched and ran the model using Ollama itself:
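Something along these lines should do it, assuming Ollama's support for pulling GGUF files directly from Hugging Face and the Q8_0 quantization from the Unsloth repository:

    # Fetch the quantized model from Hugging Face and start an interactive chat
    ollama run hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0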
This downloads around 8.5GB of model data and starts an interactive chat interface.
Once the model has been fetched, LLM can talk to it as well. I prefer using LLM for experiments because it logs everything to SQLite for later exploration.
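As a quick sketch of what that exploration can look like (assuming a standard LLM install; Datasette is optional), the CLI can replay recent responses and point you at the underlying database:

    # Show the most recent logged prompt and response
    llm logs -n 1

    # Locate the SQLite log database and browse it with Datasette
    datasette "$(llm logs path)"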
I installed the plugin and ran it like this:
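Roughly like this, assuming the llm-ollama plugin exposes the model under the same name Ollama used above:

    # Install the plugin, then send a prompt through LLM
    llm install llm-ollama
    llm -m 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0' \
      'a joke about a pelican and a walrus who run a tea room together'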
Each response starts with a <think>...</think> pseudo-XML tag containing the chain of thought used to help generate the response. For "a joke about a pelican and a walrus who run a tea room together" that started like this:
Okay, so I need to come up with a joke about a pelican and a walrus running a tea room together. Hmm, that's an interesting combination. Let me think about how these two characters might interact in a humorous situation.
First, let's consider their characteristics. Pelicans are known for their long beaks and webbed feet, often seen near the beach or water. Walruses have big teeth, thick fur, and they're typically found in colder climates, like icebergs or snowy areas. So, combining these two into a tea room setting is already a funny image.
[...]
It then thought for 20 paragraphs before outputting the joke! Here's the full transcript, and here's the output at the end, after the </think> tag:
The Joke:
A pelican and a walrus decide to open a quaint little tea room together. The walrus, with its big size, struggles to find comfortable chairs, so it sits on the table by accident, knocking over the teapot. Meanwhile, the pelican, trying to help, uses its beak to place saucers on the table, causing a few spills.
After a series of comical mishaps, the walrus looks up and says with a grin, "This isn't so fishy anymore." The pelican smirks and remarks, "Maybe not, but we do have a lot of krill in our tea!"
Explanation:
- Setup: Pelican and Walrus run an unlikely tea room. Their sizes lead to comical chaos.