
DeepSeek R1

A Powerful Reasoning Model You Can Use Today


Looking for a robust AI reasoning model? DeepSeek R1 has been making waves, and for good reason. This open-source model rivals top proprietary reasoning models such as OpenAI's o1 on several benchmarks, and you can access it for free!
A YouTube video walking through all of this is available here: https://youtu.be/sDKeYbybxKA

What is DeepSeek R1?

DeepSeek R1 is a specialised model designed for complex tasks that require in-depth reasoning. Think of it as a deep thinker in the AI world. It's particularly good at problem-solving and coding tasks, often outperforming other models in benchmarks.

How to Access DeepSeek R1

  • Web App: The easiest way to try DeepSeek is through the web application at chat.deepseek.com. Use the V3 model for general tasks and switch to the R1 model for tasks that require deeper reasoning. Note: while the chat interface is free, API access is billed separately.

  • Ollama: For more advanced users, DeepSeek R1 can be run locally using Ollama. Visit ollama.com, download the Ollama software for your operating system (macOS, Linux, or Windows), and then select the desired DeepSeek R1 model size from the available options (1.5B, 7B, 8B, 14B, 32B, or 70B distilled variants, or the full 671B model). Each size has different RAM requirements, so choose one that's appropriate for your system.
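As a rough rule of thumb, a quantized model needs about (parameters × bits per weight) ÷ 8 bytes of memory for the weights, plus some overhead for the KV cache and runtime. Here's a small sketch of that arithmetic; the 4-bit quantization default and the 20% overhead factor are illustrative assumptions, not official Ollama figures:

```python
def estimate_ram_gb(params_billions: float,
                    bits_per_weight: int = 4,
                    overhead: float = 1.2) -> float:
    """Rough RAM estimate for a quantized model: weight bytes plus ~20% overhead."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return round(weight_bytes * overhead / 1e9, 1)

# Ballpark figures for the smaller distilled sizes at 4-bit quantization.
for size in (1.5, 7, 14, 32, 70):
    print(f"{size}B -> ~{estimate_ram_gb(size)} GB")
```

By this estimate the 7B model fits comfortably in about 4–5 GB, while the 70B model needs roughly 40 GB and more, so it's out of reach for most laptops.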

Using DeepSeek R1 with Other Models

A recommended workflow is to use DeepSeek R1 first to plan or strategise the solution to a complex problem. Then, feed the output from DeepSeek R1 to a general-purpose large language model (LLM) like Claude or GPT to execute the plan. This approach can be more cost-effective than relying solely on more expensive LLMs for the entire process.
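The two-stage workflow above can be sketched as a small function that takes the two models as plain callables. The function name, prompts, and stub models here are illustrative assumptions; in practice `planner` would wrap a DeepSeek R1 API call and `executor` a Claude or GPT call:

```python
from typing import Callable

def plan_then_execute(planner: Callable[[str], str],
                      executor: Callable[[str], str],
                      task: str) -> str:
    """Ask a reasoning model for a plan, then have a general LLM carry it out."""
    plan = planner(f"Devise a step-by-step plan for this task:\n{task}")
    return executor(f"Follow this plan to complete the task.\n"
                    f"Task: {task}\nPlan:\n{plan}")

# Stub models stand in for real API clients, purely for illustration.
result = plan_then_execute(
    planner=lambda prompt: "1. Parse the input. 2. Compute. 3. Return the result.",
    executor=lambda prompt: f"Executed a prompt of {len(prompt)} characters",
    task="Sum a list of numbers",
)
print(result)
```

Because the cheap reasoning step produces the plan, the expensive general-purpose model only has to follow instructions, which typically uses fewer of its tokens.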

Cost Comparison

  • OpenAI models (e.g., GPT) can cost $10-16 per million output tokens.

  • Claude models can cost $15 per million output tokens.

  • DeepSeek R1, accessed via their API or OpenRouter, is significantly more affordable, costing around $2 per million output tokens. The free chat interface makes it even more attractive for experimentation.
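Using the figures quoted above, here is the arithmetic for a hypothetical workload of 5 million output tokens (input-token costs are ignored for simplicity, and the prices are the approximate ones listed, not live rates):

```python
# Approximate USD prices per million output tokens, as quoted above.
PRICE_PER_MILLION_OUTPUT = {
    "GPT (high end)": 16.0,
    "Claude": 15.0,
    "DeepSeek R1": 2.0,
}

def output_cost(model: str, output_tokens: int) -> float:
    """Cost in USD for the given number of output tokens."""
    return PRICE_PER_MILLION_OUTPUT[model] * output_tokens / 1_000_000

tokens = 5_000_000
for model in PRICE_PER_MILLION_OUTPUT:
    print(f"{model}: ${output_cost(model, tokens):.2f}")
```

At these rates, 5 million output tokens cost $10 with DeepSeek R1 versus $75 with Claude, roughly a 7.5x difference.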

Pricing and features can change, so please always refer to the official sources for the most up-to-date information.

  • DeepSeek Web App: chat.deepseek.com

  • DeepSeek Website (for API info, etc.): deepseek.com (Note: The API Platform link shown in the video resulted in a 404 error, suggesting a potential temporary issue.)

  • Hugging Face DeepSeek R1 Repository: huggingface.co/deepseek-ai/DeepSeek-R1 (for downloading the model weights directly, though this is intended for more technical users).

  • OpenRouter (for accessing DeepSeek R1 and other LLMs): openrouter.ai

  • Ollama (for running LLMs locally): ollama.com

  • Chatbot Arena (LLM Leaderboard): arena.lmsys.org (this was used for comparing model performance)