Merged
3 changes: 2 additions & 1 deletion .env.example
@@ -5,4 +5,5 @@ OPENAI_API_KEY=
OPENROUTER_API_KEY=
GOOGLE_API_KEY=
XAI_API_KEY=
GROQ_API_KEY=
DEEPSEEK_API_KEY=
164 changes: 164 additions & 0 deletions guides/deepseek.md
@@ -0,0 +1,164 @@
# DeepSeek

Use DeepSeek AI models through their OpenAI-compatible API.

## Overview

DeepSeek provides powerful language models including:

- **deepseek-chat** - General-purpose conversational model
- **deepseek-reasoner** - Model focused on complex reasoning and problem-solving

## Prerequisites

1. Sign up at https://platform.deepseek.com/
2. Create an API key
3. Add the key to your environment:

```bash
# .env
DEEPSEEK_API_KEY=your-api-key-here
```

## Usage

### Basic Generation

Since DeepSeek models are not yet in the LLMDB catalog, use an inline model spec:

```elixir
# Using inline model spec (recommended)
{:ok, response} = ReqLLM.generate_text(
%{provider: :deepseek, id: "deepseek-chat"},
"Hello, how are you?"
)

# Or normalize first
model = ReqLLM.model!(%{provider: :deepseek, id: "deepseek-chat"})
{:ok, response} = ReqLLM.generate_text(model, "Hello!")
```

### Code Generation

```elixir
model = ReqLLM.model!(%{provider: :deepseek, id: "deepseek-reasoner"})

{:ok, response} = ReqLLM.generate_text(
model,
"Write a Python function to calculate fibonacci numbers",
temperature: 0.2,
max_tokens: 2000
)
```

### Streaming

```elixir
model = ReqLLM.model!(%{provider: :deepseek, id: "deepseek-chat"})

{:ok, stream} = ReqLLM.stream_text(model, "Tell me a story about space exploration")

for chunk <- stream do
IO.write(chunk.text || "")
end
```

### With System Context

```elixir
context = ReqLLM.Context.new([
ReqLLM.Context.system("You are a helpful coding assistant."),
ReqLLM.Context.user("How do I parse JSON in Elixir?")
])

model = ReqLLM.model!(%{provider: :deepseek, id: "deepseek-reasoner"})

{:ok, response} = ReqLLM.generate_text(model, context)
```

## Helper Module

For convenience, create a wrapper module:

```elixir
defmodule MyApp.DeepSeek do
def chat(prompt, opts \\ []) do
model = ReqLLM.model!(%{provider: :deepseek, id: "deepseek-chat"})
ReqLLM.generate_text(model, prompt, opts)
end

def think(prompt, opts \\ []) do
model = ReqLLM.model!(%{provider: :deepseek, id: "deepseek-reasoner"})
ReqLLM.generate_text(model, prompt, Keyword.merge([temperature: 0.2], opts))
end

def stream_chat(prompt, opts \\ []) do
model = ReqLLM.model!(%{provider: :deepseek, id: "deepseek-chat"})
ReqLLM.stream_text(model, prompt, opts)
end
end

# Usage
MyApp.DeepSeek.chat("Explain quantum computing")
MyApp.DeepSeek.think("Write a React component for a todo list")
```

## Configuration

### Environment Variables

- `DEEPSEEK_API_KEY` - Required. Your DeepSeek API key.
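
If your application does not auto-load `.env` files, it can be useful to fail fast at boot when the key is missing. A minimal sketch (the `MyApp.DeepSeekConfig` module name is hypothetical, not part of ReqLLM):

```elixir
defmodule MyApp.DeepSeekConfig do
  # System.fetch_env!/1 raises ArgumentError naming the missing
  # variable, so a misconfigured deploy fails loudly at startup
  # instead of on the first API call.
  def api_key!, do: System.fetch_env!("DEEPSEEK_API_KEY")
end
```

Call `MyApp.DeepSeekConfig.api_key!()` during application start, or pass its result as the `api_key:` option on individual requests.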

### Per-Request API Key

```elixir
ReqLLM.generate_text(
%{provider: :deepseek, id: "deepseek-chat"},
"Hello!",
api_key: "sk-..."
)
```

## Available Models

| Model | Use Case | Context Window |
|-------|----------|----------------|
| `deepseek-chat` | General conversation, Q&A | 64K tokens |
| `deepseek-reasoner` | Complex reasoning tasks | 64K tokens |

Check https://platform.deepseek.com/docs for the latest model information.

## Troubleshooting

### `{:error, :not_found}` when using string spec

DeepSeek models are not yet in the LLMDB registry. Use an inline model spec instead:

```elixir
# ❌ Won't work (model not in LLMDB)
ReqLLM.generate_text("deepseek:deepseek-chat", "Hello!")

# ✅ Works (inline model spec)
ReqLLM.generate_text(
%{provider: :deepseek, id: "deepseek-chat"},
"Hello!"
)
```

### Authentication Errors

- Ensure `DEEPSEEK_API_KEY` is set in your `.env` file
- Check that the API key is valid at https://platform.deepseek.com/

### Rate Limits

DeepSeek API has rate limits. If you encounter rate limiting:
- Implement exponential backoff
- Consider batching requests
- Check your plan limits at https://platform.deepseek.com/
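
The exponential-backoff suggestion above can be sketched as a small retry wrapper. This is an illustration, not part of ReqLLM: the `MyApp.DeepSeekRetry` module name is hypothetical, and matching rate-limit errors on `{:error, %{status: 429}}` is an assumption about the error shape — adjust the pattern to whatever your ReqLLM version actually returns.

```elixir
defmodule MyApp.DeepSeekRetry do
  @base_delay_ms 500
  @max_attempts 5

  # Delay doubles on each attempt: 500ms, 1s, 2s, ...
  def backoff_delay(attempt), do: @base_delay_ms * Integer.pow(2, attempt)

  # Runs `fun` and retries on a rate-limit error (assumed shape:
  # {:error, %{status: 429}}), sleeping between attempts.
  def with_backoff(fun, attempt \\ 0) do
    case fun.() do
      {:error, %{status: 429}} when attempt < @max_attempts - 1 ->
        Process.sleep(backoff_delay(attempt))
        with_backoff(fun, attempt + 1)

      other ->
        other
    end
  end
end

# Usage:
# MyApp.DeepSeekRetry.with_backoff(fn ->
#   ReqLLM.generate_text(%{provider: :deepseek, id: "deepseek-chat"}, "Hello!")
# end)
```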

## Resources

- [DeepSeek Platform](https://platform.deepseek.com/)
- [DeepSeek API Documentation](https://platform.deepseek.com/docs)
- [Model Specs Guide](model-specs.md) - For more on inline model specifications
52 changes: 52 additions & 0 deletions lib/req_llm/providers/deepseek.ex
@@ -0,0 +1,52 @@
defmodule ReqLLM.Providers.Deepseek do
@moduledoc """
DeepSeek AI provider – OpenAI-compatible Chat Completions API.

## Implementation

Uses built-in OpenAI-style encoding/decoding defaults.
DeepSeek is fully OpenAI-compatible, so no custom request/response handling is needed.

## Authentication

Requires a DeepSeek API key from https://platform.deepseek.com/

## Configuration

# Add to .env file (automatically loaded)
DEEPSEEK_API_KEY=your-api-key

## Examples

# Basic usage (inline model spec – DeepSeek models are not yet in the LLMDB catalog)
ReqLLM.generate_text(%{provider: :deepseek, id: "deepseek-chat"}, "Hello!")

# With custom parameters
ReqLLM.generate_text(%{provider: :deepseek, id: "deepseek-reasoner"}, "Write a function",
temperature: 0.2,
max_tokens: 2000
)

# Streaming
{:ok, stream} = ReqLLM.stream_text(%{provider: :deepseek, id: "deepseek-chat"}, "Tell me a story")
Enum.each(stream, &IO.write(&1.text || ""))

## Models

DeepSeek offers several models including:

- `deepseek-chat` - General-purpose conversational model
- `deepseek-reasoner` - Model focused on complex reasoning and problem-solving

See https://platform.deepseek.com/docs for full model documentation.
"""

use ReqLLM.Provider,
id: :deepseek,
default_base_url: "https://api.deepseek.com",
default_env_key: "DEEPSEEK_API_KEY"

use ReqLLM.Provider.Defaults

@provider_schema []
end
1 change: 1 addition & 0 deletions mix.exs
@@ -62,6 +62,7 @@ defmodule ReqLLM.MixProject do
"guides/ollama.md",
"guides/amazon_bedrock.md",
"guides/cerebras.md",
"guides/deepseek.md",
"guides/meta.md",
"guides/zenmux.md",
"guides/zai.md",