Access DeepSeek V3.1 in Trae: Complete Setup and Integration Guide

DeepSeek V3.1 is a state-of-the-art open-source Mixture-of-Experts model (671B parameters, 37B activated, 128K context) that supports both thinking and non-thinking modes. Trae is an AI-powered IDE that lets developers plug in models via API.

This guide walks through integrating DeepSeek V3.1 into Trae: from system requirements and a compatibility comparison, to step-by-step setup, usage best practices, troubleshooting, and security considerations.

What Trae + DeepSeek V3.1 Enable Together

1. AI-Powered Coding Within Your Editor

Inside your Trae IDE, DeepSeek V3.1 becomes an intelligent coding companion. It can generate, refactor, explain, or debug code—all without leaving your editor. Trae’s “agent mode” or builder interface lets DeepSeek V3.1 operate within Task flows, intelligently handling multi-step coding tasks or reasoning workflows via tool-calling and agent-style logic.

2. Smarter Tool & Agent Workflows

With enhanced tool-calling and search-agent capabilities, DeepSeek V3.1 can integrate with functions like formatters, linters, or even external tools. Using Trae’s multimodal capabilities, developers can simply provide design mockups and watch as the system translates them into production-ready code. In Trae’s agent mode, this means the model can trigger actions (e.g., run tests, search documentation) as part of a structured workflow.
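
As a sketch of what such a tool hookup looks like on the API side: a function tool is declared as a JSON schema and model-issued calls are dispatched to local code. The run_linter tool name and schema below are hypothetical illustrations, not part of Trae’s or DeepSeek’s official interfaces.

```python
import json

# Hypothetical tool the model may call; the name and schema are illustrative.
tools = [
    {
        "type": "function",
        "function": {
            "name": "run_linter",
            "description": "Run a linter on a source file and return its findings.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "File to lint"},
                },
                "required": ["path"],
            },
        },
    }
]

def dispatch_tool_call(name: str, arguments: str) -> str:
    """Map a model-issued tool call (name + JSON argument string) to local code."""
    args = json.loads(arguments)
    if name == "run_linter":
        # Stub result; a real dispatcher would invoke the actual linter here.
        return json.dumps({"path": args["path"], "issues": []})
    raise ValueError(f"Unknown tool: {name}")

# With the OpenAI-compatible client shown later in this guide, the schema is
# passed as: client.chat.completions.create(model=..., messages=..., tools=tools)
```

The tool result string is then appended to the conversation as a `tool` role message so the model can continue the workflow.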

Key Data Points — DeepSeek V3.1

  • Reasoning & Coding Accuracy
    • AIME 2025 benchmark: 88.4% → close to GPT-5 on math reasoning tasks
  • Context Length
    • Supports 128K tokens → can handle large codebases, documents, and long conversations in a single run
  • Performance & Cost
    • Open-source MIT License
    • Improved inference efficiency with lower operating cost compared to closed-source models
  • Tool Use
    • Stronger structured tool calling and plugin workflows
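
As a rough illustration of what a 128K-token window means in practice, the common ~4-characters-per-token heuristic (an approximation, not DeepSeek’s actual tokenizer) can be used to pre-check whether an input fits:

```python
# 128K-token context limit, per the data points above.
CONTEXT_LIMIT = 128_000

def estimate_tokens(text: str) -> int:
    """Rough token count using the ~4 characters-per-token heuristic."""
    return max(1, len(text) // 4)

source = "def add(a, b):\n    return a + b\n" * 1000
tokens = estimate_tokens(source)
print(f"~{tokens} tokens; fits in context: {tokens <= CONTEXT_LIMIT}")
```

For precise counts you would need the model’s real tokenizer; this heuristic is only for quick ballpark checks before sending large codebases.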

DeepSeek V3.1 vs. Similar Models in Trae

| Feature | DeepSeek V3.1 | Kimi-K2 | Qwen3-Coder |
| --- | --- | --- | --- |
| Model Type (Params) | MoE: 671B total (37B active) | MoE: 1,000B total (32B active) | MoE: 480B total (35B active) |
| Context Window | 128K tokens | 128K tokens | 262K tokens (native) |
| Special Modes | Both “thinking” (<think>) and non-thinking modes | Primarily non-thinking (agentic workflows) | Non-thinking only (coding-focused) |
| Primary Focus | General reasoning, QA, coding, tool use | Agentic tasks, coding/debugging | Advanced coding and agentic code tasks |
| Trae Integration | Provider: DeepSeek (API key) / Hugging Face | Provider: Novita (via Moonshot) | Typically via Hugging Face or custom API |
| API Support | Third-party service provider (e.g., Novita AI) | Trae or third-party service provider (e.g., Novita AI) | Third-party service provider (e.g., Novita AI) |

Kimi-K2 leads in coding/debugging, Qwen3-Coder stands out with long context and coding specialization, while DeepSeek V3.1 offers the most flexible balance of reasoning and code-related tasks.

Alternatives to Trae: Claude Code and Qwen Coder

| Tool / Model | Strengths | Trade-offs / Notes |
| --- | --- | --- |
| Trae (IDE) | Free, integrated IDE with AI-powered features | Telemetry and data-tracking concerns |
| Claude Code | High accuracy, long context, polished outputs, agentic CLI | Higher cost, proprietary model (the model must support the Anthropic API) |
| Qwen Coder | Open-source, large context, cost-effective, local hosting | Slower than Claude, requires infrastructure for high token volumes |

How to Access DeepSeek V3.1 within Trae?

The Trae IDE does not support locally deployed large language models; it only supports models accessed via API calls.

Since you will use the DeepSeek V3.1 API (rather than local inference), Trae only needs internet access and an API provider key.
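
Before configuring anything in Trae, you can sanity-check that your key and endpoint work. This is a minimal sketch using only the Python standard library, assuming the OpenAI-compatible /models route on Novita’s endpoint and an assumed NOVITA_API_KEY environment variable:

```python
import os
import urllib.error
import urllib.request

def check_key(api_key: str, base_url: str = "https://api.novita.ai/openai") -> bool:
    """Return True if the key is accepted by the OpenAI-compatible /models route."""
    req = urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # e.g., 401 for an invalid or expired key

key = os.environ.get("NOVITA_API_KEY")
if key:
    print("key valid:", check_key(key))
else:
    print("NOVITA_API_KEY not set; skipping check")
```

If this returns False, fix the key before touching Trae’s settings; it saves a round of IDE-side troubleshooting later.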

Prerequisites: Get API Key

Novita AI provides GPT-OSS 120B APIs with 131K context at $0.1/input and $0.5/output. Novita AI also provides GPT-OSS 20B with 131K context at $0.05/input and $0.2/output, delivering strong support for maximizing GPT-OSS’s code-agent potential.

Step 1: Log In and Access the Model Library

Log in to your account and click on the Model Library button.

Step 2: Choose Your Model

Browse through the available options and select the model that suits your needs.

Step 3: Start Your Free Trial

Begin your free trial to explore the capabilities of the selected model.

Step 4: Get Your API Key

To authenticate with the API, you need an API key. Open the “Settings” page and copy the API key as indicated in the image.

Step 5: Install the API

Install the API client library using the package manager for your programming language.

After installation, import the necessary libraries into your development environment and initialize the client with your API key to start interacting with Novita AI LLMs. Here is an example of using the chat completions API in Python.

from openai import OpenAI

base_url = "https://api.novita.ai/openai"
api_key = "<Your API Key>"
model = "deepseek/deepseek-v3.1"

client = OpenAI(
    base_url=base_url,
    api_key=api_key,
)

stream = True  # or False
max_tokens = 1000
response_format = {"type": "text"}

chat_completion_res = client.chat.completions.create(
    model=model,
    messages=[
        {
            "role": "user",
            "content": "Hi there!",
        }
    ],
    stream=stream,
    max_tokens=max_tokens,
    response_format=response_format,
)

# Streamed responses arrive chunk by chunk; non-streamed responses
# return the full message at once.
if stream:
    for chunk in chat_completion_res:
        print(chunk.choices[0].delta.content or "", end="")
else:
    print(chat_completion_res.choices[0].message.content)

Use DeepSeek V3.1 in Trae

Step 1: Open Trae and Access Models

Launch the Trae app. Click the Toggle AI Side Bar in the top-right corner to open the AI Side Bar. Then, go to AI Management and select Models.

Step 2: Add a Custom Model and Choose Novita as Provider and Select Models

Click the Add Model button to create a custom model entry. In the add-model dialog, select Provider = Novita from the dropdown menu.

From the Model dropdown, pick your desired model (DeepSeek-R1-0528, Kimi K2, GLM 4.5, DeepSeek-V3-0324, or MiniMax-M1-80k). If the exact model isn’t listed, simply type the model ID that you noted from the Novita model library (for DeepSeek V3.1, that is deepseek/deepseek-v3.1, as used in the code example above). Ensure you choose the correct variant of the model you want to use.

Step 3: Enter Your API Key

Copy the Novita AI API key from your Novita console and paste it into the API Key field in Trae.

Best Practices for Using DeepSeek v3.1

Automatic Code Error Fixing

The tool detects issues in code (e.g., missing .js extensions in import statements), provides fixes that can be applied directly (as shown by the “Apply success” notification in the first image), and automatically writes the changes to the corresponding file.

Error Context Collection for Direct Chat Inquiries

When errors occur (like the ERR_MODULE_NOT_FOUND in the second image), the tool gathers the full error stack and context, allowing users to ask questions about the issue directly in the chat interface.

Direct Terminal Command Execution

The interface supports running terminal commands (e.g., npm install expo-image-picker expo-media-library in the third image) with a dedicated “Run” button, streamlining command-based workflows.

DeepSeek V3.1 Trae Access Troubleshooting

  • Connection / API Issues
    • Verify API key and endpoint first.
    • 401 Authentication fails → invalid or expired key.
    • 402 Insufficient Balance → account quota/balance exhausted.
    • 429 Rate Limit → calling too fast; pause and retry.
    • Check DeepSeek API status page if the service is down.
  • Performance Slow
    • Large model → initial responses may lag.
    • If long queries freeze Trae’s chat, reduce input size or examples.
    • Check internet speed (all traffic goes online).
  • Unexpected Output
    • <think> or </think> tags may appear in reasoning mode.
    • These are internal markers and can be ignored.
  • Error Codes
    • 400 → malformed input (check JSON/chat format).
    • 500+ → server-side issues; retry later.
  • Common Fixes
    • Re-check API key, quota, and account balance.
    • Shorten overly long prompts.
    • Ensure Trae has internet access and is updated.
    • Consult DeepSeek API docs for detailed troubleshooting.
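
For the <think> markers mentioned above, a small post-processing helper can strip them before displaying output. This is a minimal sketch; the regular expression assumes the tags appear as literal <think>…</think> pairs in the text:

```python
import re

# Remove internal <think>…</think> reasoning traces from model output.
THINK_RE = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def strip_thinking(text: str) -> str:
    """Return the visible answer with reasoning markers removed."""
    return THINK_RE.sub("", text).strip()

raw = "<think>Check the import path first.</think>The fix is to add the .js extension."
print(strip_thinking(raw))  # → The fix is to add the .js extension.
```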

DeepSeek V3.1 Security Considerations in Trae

  • Data Privacy
    • All inputs are sent to a remote model when using the cloud API.
    • For sensitive data, consider local deployment or private hosting.
    • Avoid sending proprietary or confidential code via cloud.
  • API Keys & Secrets
    • Treat your API key like a password; never hard-code in shared projects.
    • Trae stores keys in settings—keep your device secure.
    • Rotate keys regularly and use environment variables if possible.
  • Data Handling
    • DeepSeek’s API uses HTTPS encryption.
    • Review your organization’s policy before using third-party LLMs.
    • For maximum security, run a local or private model server.
  • Sandboxing & System Safety
    • Trae runs locally and only calls DeepSeek via API.
    • Minimal risk, but keep Trae updated and maintain antivirus protection.
  • Compliance
    • In regulated industries, confirm DeepSeek use meets governance rules.
    • Some teams anonymize or scrub inputs before sending to APIs.
  • Best Practices
    • Store API keys securely and rotate often.
    • Be cautious with sensitive inputs; redact or anonymize where needed.
    • Rely on encrypted channels (HTTPS).
    • Follow standard API hygiene—no extra security requirements beyond that.
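
The environment-variable recommendation above can be sketched as follows; NOVITA_API_KEY is an assumed variable name, not one mandated by Trae or Novita:

```python
import os

def load_api_key(var: str = "NOVITA_API_KEY") -> str:
    """Fetch the API key from the environment; fail loudly if it is missing."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set {var} before running this script")
    return key

# Usage with the client from the setup section:
# client = OpenAI(base_url="https://api.novita.ai/openai", api_key=load_api_key())
```

Keeping the key out of source files means it never lands in version control, which covers the most common way keys leak.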

DeepSeek V3.1 brings powerful reasoning, coding, and tool-calling capabilities into the Trae IDE, making it more than just a coding assistant—it becomes an intelligent agent for end-to-end workflows. Compared with peers like Kimi-K2 and Qwen Coder, it offers the most balanced mix of reasoning accuracy, context handling, and tool use. While Trae only connects via API (no local models), setup is straightforward, and once integrated, developers gain a secure, versatile environment for debugging, refactoring, and building code efficiently.

Frequently Asked Questions

Can I run DeepSeek V3.1 locally in Trae?

No. Trae only supports models via API calls, not local deployment.

How do I connect DeepSeek V3.1 to Trae?

Get an API key from a provider (e.g., Novita AI), add the model in Trae’s AI Management settings, and paste your key.

Why is performance slow?

DeepSeek V3.1 is a large model. Long queries or large inputs may take longer to process. Reduce prompt size or check internet speed.

Novita AI is the all-in-one cloud platform that empowers your AI ambitions. Integrated APIs, serverless, GPU instances — the cost-effective tools you need. Eliminate infrastructure, start free, and make your AI vision a reality.

Recommended Reading

Fine-Tuning DeepSeek R1-0528: More Cost-Effective Solutions

Bare Metal vs On-Demand Instance: Which Is Right for Your Small Business?

DeepSeek R1 0528 Cost: API, GPU, On-Prem Comparison

