Alternative LLM Providers

You can use Claude Code with alternative LLMs served via Anthropic-compatible APIs. Each provider exposes an endpoint that speaks the Anthropic /v1/messages format, so Claude Code works without any code changes — you just set a few environment variables.

The shell functions below use subshells ( ... ) so the environment variables do not leak into your main shell session. This means you can run multiple instances of Claude Code simultaneously, each using a different LLM.
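The scoping is easy to verify yourself; this one-liner uses a hypothetical variable name:

Terminal window
( export DEMO_VAR="scoped"; echo "inside: $DEMO_VAR" )  # prints "inside: scoped"
echo "outside: $DEMO_VAR"                               # prints "outside: " (unset)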

Before using any provider, export the corresponding API key in your shell (or add it to your .zshrc / .bashrc):

Terminal window
export KIMI_API_KEY="your-kimi-key"
export Z_API_KEY="your-z-key"
export DEEPSEEK_API_KEY="your-deepseek-key"
export MINIMAX_API_KEY="your-minimax-key"

Add the function for the provider(s) you want to use to your ~/.zshrc or ~/.bashrc, then use it exactly like the claude command.

Kimi K2 from Moonshot AI.

Terminal window
kimi() {
  (
    export ANTHROPIC_BASE_URL=https://api.moonshot.ai/anthropic
    export ANTHROPIC_AUTH_TOKEN="$KIMI_API_KEY"
    claude "$@"
  )
}

Usage:

Terminal window
kimi # start interactive session
kimi -p "explain X" # one-shot prompt
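
The same pattern works for the other providers. Here is a minimal sketch for Z.ai's GLM models, assuming Z.ai's Anthropic-compatible endpoint lives at https://api.z.ai/api/anthropic (verify the URL against their current docs):

Terminal window
zai() {
  (
    # Assumed endpoint; check Z.ai's documentation
    export ANTHROPIC_BASE_URL=https://api.z.ai/api/anthropic
    export ANTHROPIC_AUTH_TOKEN="$Z_API_KEY"
    claude "$@"
  )
}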

Each function:

  1. Opens a subshell so env vars are scoped
  2. Sets ANTHROPIC_BASE_URL to the provider’s Anthropic-compatible endpoint
  3. Sets ANTHROPIC_AUTH_TOKEN to your API key
  4. Optionally overrides model names (required for DeepSeek and MiniMax; see the DeepSeek sketch after this list)
  5. Launches claude with any arguments you pass
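
As a sketch of step 4, a DeepSeek function might look like the following, assuming DeepSeek's Anthropic-compatible endpoint at https://api.deepseek.com/anthropic and the deepseek-chat model name (verify both against DeepSeek's current docs). Claude Code reads ANTHROPIC_MODEL and ANTHROPIC_SMALL_FAST_MODEL to decide which model names to request:

Terminal window
deepseek() {
  (
    # Assumed endpoint and model names; check DeepSeek's docs
    export ANTHROPIC_BASE_URL=https://api.deepseek.com/anthropic
    export ANTHROPIC_AUTH_TOKEN="$DEEPSEEK_API_KEY"
    # DeepSeek does not serve Anthropic model names, so override both
    # the main model and the small/fast model Claude Code uses for
    # background tasks.
    export ANTHROPIC_MODEL=deepseek-chat
    export ANTHROPIC_SMALL_FAST_MODEL=deepseek-chat
    claude "$@"
  )
}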

Because everything runs in a subshell, your main shell environment is unaffected. You can have one terminal running kimi and another running zai at the same time.