DeepSeek integrations: third-party toolchain reference

Specific wiring patterns for the tools developers actually use — LangChain, LlamaIndex, Cursor, Continue.dev, n8n, and Zapier — with enough detail to get from zero to a working DeepSeek integration in each environment.

The base URL pattern: the universal starting point

Every DeepSeek integration in a tool that already supports OpenAI follows the same short pattern: point the base URL at the DeepSeek endpoint, swap in a DeepSeek API key, and select a DeepSeek model name. That pattern works in every tool listed on this page.

The DeepSeek API is designed to be OpenAI-compatible at the HTTP level. The request and response format for /v1/chat/completions matches the OpenAI specification closely enough that any client written to that spec works with DeepSeek by pointing at a different host. The model names differ — deepseek-chat for V3 and deepseek-reasoner for R1 are the current production model identifiers — but the request shape, the streaming protocol, the function-calling schema, and the error codes are all consistent with what OpenAI clients expect.
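The compatibility claim above is easiest to see at the request level. The sketch below builds the OpenAI-style chat-completions body with nothing but the standard library; `<deepseek-endpoint>` is the placeholder this page uses for the DeepSeek API base URL, and the prompt content is illustrative.

```python
import json

# Sketch of the OpenAI-style chat-completions request that the DeepSeek
# endpoint accepts. "<deepseek-endpoint>" is this page's placeholder for
# the DeepSeek API base URL.
url = "<deepseek-endpoint>/v1/chat/completions"

payload = {
    "model": "deepseek-chat",          # or "deepseek-reasoner" for R1
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Say hello."},
    ],
    "stream": False,                   # start non-streaming, as recommended later on this page
}

body = json.dumps(payload).encode("utf-8")
# POST `body` to `url` with headers:
#   Authorization: Bearer <your-key>
#   Content-Type: application/json
```

Because this is byte-for-byte the OpenAI request shape, any client that can emit it — an SDK, an HTTP node in an automation tool, or a hand-rolled request — can talk to DeepSeek.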

This means that for most tools, the integration question is not "does this tool support DeepSeek" but "does this tool allow me to change the base URL for an OpenAI-compatible provider." The answer is yes for every tool covered on this page, and for most tools not listed here as well.

LangChain and LlamaIndex

Both orchestration libraries offer a native DeepSeek integration class and also work through the OpenAI-compatibility path, giving developers two routes with different maintenance trade-offs.

For LangChain, the recommended path for new integrations is the langchain-deepseek package, which provides a ChatDeepSeek class. Install it with pip install langchain-deepseek, set the DEEPSEEK_API_KEY environment variable, and instantiate the class with model="deepseek-chat" or model="deepseek-reasoner". The class implements the full LangChain chat model interface, so it works in chains, agents, and LCEL pipelines without modification. For projects that cannot add the new package, the fallback is ChatOpenAI(base_url="<deepseek-endpoint>", api_key="<your-key>", model="deepseek-chat") — fully functional and commonly used.
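The two LangChain routes just described can be sketched side by side. This is a minimal sketch, assuming `langchain-deepseek` (native route) or `langchain-openai` (fallback route) is installed and `DEEPSEEK_API_KEY` is set; `<deepseek-endpoint>` stands in for the DeepSeek API base URL.

```python
import os


def native_route():
    # Native integration: ChatDeepSeek reads DEEPSEEK_API_KEY from the
    # environment and implements the full LangChain chat model interface.
    from langchain_deepseek import ChatDeepSeek
    return ChatDeepSeek(model="deepseek-chat", temperature=0)


def fallback_route():
    # OpenAI-compatibility fallback: same interface, different constructor,
    # no extra package beyond langchain-openai.
    from langchain_openai import ChatOpenAI
    return ChatOpenAI(
        base_url="<deepseek-endpoint>",  # the DeepSeek API base URL
        api_key=os.environ["DEEPSEEK_API_KEY"],
        model="deepseek-chat",
    )
```

Either object drops into chains, agents, and LCEL pipelines interchangeably; the native class is the better long-term bet because DeepSeek-specific behaviour lands there first.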

For LlamaIndex, the llama-index-llms-deepseek package provides a DeepSeek LLM class. As with LangChain, the OpenAI-compatibility fallback also works: instantiate OpenAI(api_base="<deepseek-endpoint>", api_key="<your-key>", model="deepseek-chat"). Both paths support streaming and async inference. For detailed architecture guidance on RAG patterns using LlamaIndex with DeepSeek, see the ecosystem overview page.
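The equivalent LlamaIndex wiring, sketched under the same assumptions (the `llama-index-llms-deepseek` package for the native route, `DEEPSEEK_API_KEY` set, `<deepseek-endpoint>` as the base-URL placeholder):

```python
import os


def native_route():
    # Native integration from the llama-index-llms-deepseek package.
    from llama_index.llms.deepseek import DeepSeek
    return DeepSeek(
        model="deepseek-chat",
        api_key=os.environ["DEEPSEEK_API_KEY"],
    )


def fallback_route():
    # OpenAI-compatibility fallback via LlamaIndex's OpenAI LLM class.
    # Note: some LlamaIndex versions validate model names for metadata
    # lookup; the native class sidesteps that concern.
    from llama_index.llms.openai import OpenAI
    return OpenAI(
        api_base="<deepseek-endpoint>",
        api_key=os.environ["DEEPSEEK_API_KEY"],
        model="deepseek-chat",
    )
```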

Recap Capsule

The DeepSeek integrations that require the least code change are those built on the OpenAI SDK compatibility layer. If you have a working OpenAI integration today, switching to DeepSeek is: (1) change the base URL to the DeepSeek API endpoint, (2) swap the API key, (3) change the model name to deepseek-chat or deepseek-reasoner. Three changes, no library upgrades, no interface rewrites. Test with a simple non-streaming call first, then enable streaming and function calling once the basics are confirmed.
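The three changes in the recap can be sketched with the OpenAI Python SDK. This assumes the `openai` package is installed and `DEEPSEEK_API_KEY` is set; `<deepseek-endpoint>` is the placeholder for the DeepSeek base URL.

```python
import os


def make_client():
    # Changes (1) and (2): base_url and key. Everything else is a
    # stock OpenAI SDK setup.
    from openai import OpenAI
    return OpenAI(
        base_url="<deepseek-endpoint>",
        api_key=os.environ["DEEPSEEK_API_KEY"],
    )


def smoke_test(client):
    # Change (3) is the model name. A simple non-streaming call
    # confirms the wiring before you enable streaming or tools.
    resp = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{"role": "user", "content": "ping"}],
    )
    return resp.choices[0].message.content
```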

Cursor and Continue.dev

Cursor and Continue.dev are two of the most widely used AI-native code editors, and both support DeepSeek through their custom OpenAI-compatible provider configuration.

In Cursor, navigate to Settings > Models > Add Model. Set the provider to OpenAI-compatible, enter the DeepSeek API base URL as the base URL field, paste your DeepSeek API key, and enter deepseek-chat or deepseek-reasoner as the model name. Save and select the model in the model picker. Cursor will use DeepSeek for completions, chat, and the Composer feature. Using deepseek-reasoner in Cursor's chat produces noticeably longer and more step-by-step responses for complex refactoring tasks — the R1 chain-of-thought translates well to architectural explanation prompts.

In Continue.dev, add a model block to ~/.continue/config.json: set "provider": "openai", "apiBase": "<deepseek-endpoint>", "apiKey": "<your-key>", and "model": "deepseek-chat". Continue.dev supports multiple model configurations simultaneously, so you can define both a V3 chat model for quick completions and an R1 model for in-depth code review sessions, and switch between them from the model picker in the sidebar.
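A Continue.dev config fragment with both models defined might look like the following. The "title" values are arbitrary labels of our choosing; "apiBase" and "apiKey" keep this page's placeholders.

```json
{
  "models": [
    {
      "title": "DeepSeek V3",
      "provider": "openai",
      "apiBase": "<deepseek-endpoint>",
      "apiKey": "<your-key>",
      "model": "deepseek-chat"
    },
    {
      "title": "DeepSeek R1",
      "provider": "openai",
      "apiBase": "<deepseek-endpoint>",
      "apiKey": "<your-key>",
      "model": "deepseek-reasoner"
    }
  ]
}
```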

n8n and Zapier

For no-code and low-code automation workflows, n8n and Zapier both support DeepSeek through their OpenAI-compatible HTTP mechanisms.

In n8n, the easiest path is the OpenAI node with the Base URL field set to the DeepSeek API endpoint. Create a new OpenAI API credential containing your DeepSeek key and attach it to the node. The OpenAI node's Chat model operation maps directly to the DeepSeek chat-completions endpoint. For more control — custom headers, full request body access — the HTTP Request node with method POST, URL set to the DeepSeek endpoint, and a JSON body containing model, messages, and any sampling parameters works equally well and is more portable across n8n versions.
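For the HTTP Request node route, the JSON body is just the chat-completions payload. The fragment below is a sketch: `{{ $json.prompt }}` is n8n's expression syntax for pulling a field from the incoming item, and the temperature value is an illustrative sampling parameter, not a recommendation.

```json
{
  "model": "deepseek-chat",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "{{ $json.prompt }}" }
  ],
  "temperature": 0.7,
  "stream": false
}
```

Set the Authorization header to `Bearer <your-key>` on the same node and the workflow is complete.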

In Zapier, use the OpenAI integration and configure a custom API key credential pointing to the DeepSeek endpoint via Zapier's advanced API settings, or use the Webhooks by Zapier action with a POST step to call the DeepSeek API directly. The Webhook approach gives full control over the request body and is reliable for any workflow that needs specific sampling parameters, system prompts, or multi-turn conversation context.

DeepSeek integrations: tool, pattern, and implementation notes
| Tool | Integration pattern | Notes |
| --- | --- | --- |
| LangChain | Native ChatDeepSeek class or ChatOpenAI with base_url override | Full chain/agent/LCEL support; function calling works via tools parameter |
| LlamaIndex | Native DeepSeek LLM class or OpenAI with api_base override | Works in RAG pipelines, query engines, ReAct agents; streaming supported |
| Cursor | Settings > Models > Add Model with OpenAI-compatible provider config | Supports chat, completions, Composer; R1 (deepseek-reasoner) for deep analysis |
| Continue.dev | config.json model block with provider openai and apiBase override | Multiple models configurable; switch V3/R1 per task from model picker |
| n8n | OpenAI node with Base URL override, or HTTP Request node for full control | Credential: OpenAI API type with DeepSeek key; all sampling params accessible |

For the broader ecosystem context — inference runtimes, evaluation harnesses, and fine-tuning toolchains — see the ecosystem overview page. For download and self-hosted inference setup that these integrations can point to instead of the hosted API, see the download reference page. The API page covers the hosted API contract in detail.

Frequently asked questions about DeepSeek integrations

Five common wiring questions from developers integrating DeepSeek into their toolchains.

How do I integrate DeepSeek with LangChain?

Install langchain-deepseek, set DEEPSEEK_API_KEY, and instantiate ChatDeepSeek(model="deepseek-chat"). The alternative — and fully supported — path is ChatOpenAI(base_url="<deepseek-endpoint>", api_key="<your-key>", model="deepseek-chat"). Both paths implement the full LangChain chat model interface and work in chains, agents, and LCEL pipelines without further changes.

How do I use DeepSeek in Cursor or Continue.dev?

In Cursor: Settings > Models > Add Model, select OpenAI-compatible provider, enter the DeepSeek API base URL and your key, set the model to deepseek-chat or deepseek-reasoner. In Continue.dev: add a model block in ~/.continue/config.json with "provider": "openai", "apiBase": "<deepseek-endpoint>", "apiKey": "<your-key>", and "model": "deepseek-chat". Both editors support multiple model configurations so you can run V3 and R1 side-by-side.

Can I use DeepSeek with n8n or Zapier for automation workflows?

Yes. In n8n, use the OpenAI node with the Base URL field pointing to the DeepSeek endpoint, or the HTTP Request node for full control over the request body. In Zapier, configure the OpenAI integration with a custom API key credential pointing to DeepSeek, or use Webhooks by Zapier with a POST action to call the DeepSeek API directly. Both approaches cover the full range of chat-completions parameters.

Does DeepSeek support function calling for agent integrations?

Yes. The DeepSeek API supports function calling following the OpenAI function-calling schema — pass functions in the tools array and inspect tool_calls in the response. This makes DeepSeek a drop-in replacement for GPT-4 function-calling in agent frameworks including LangChain agents, LlamaIndex ReAct agents, and any framework that uses the standard tools interface. Test with a simple single-function call before deploying multi-step agent workflows.
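The tools array and tool_calls parsing described above can be sketched with the standard library alone. The get_weather function here is a hypothetical example of our own, and the response fragment is mocked to show the shape a client must handle.

```python
import json

# An OpenAI-schema tool definition. "get_weather" is a hypothetical
# function invented for this sketch.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Mocked shape of one entry in the response message's tool_calls list.
mock_tool_call = {
    "id": "call_0",
    "type": "function",
    "function": {"name": "get_weather", "arguments": "{\"city\": \"Paris\"}"},
}

# Arguments arrive as a JSON *string* and must be decoded before dispatch.
args = json.loads(mock_tool_call["function"]["arguments"])
```

Pass `tools` in the request, then route each tool_calls entry to your own function by name — the same loop you would write against the OpenAI API.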

What is the simplest way to add DeepSeek to any OpenAI SDK project?

Three changes: (1) set base_url or OPENAI_API_BASE to the DeepSeek API endpoint, (2) replace the API key with your DeepSeek key, (3) change the model name to deepseek-chat (V3) or deepseek-reasoner (R1). No library upgrade required. Confirm with a simple non-streaming call, then verify streaming and function-calling behaviour before promoting to production. Most OpenAI SDK patterns work identically against the DeepSeek endpoint.