Supported Models

Claude Thinking Model

The Claude models do not enable thinking mode by default; accessing their deep reasoning capability normally requires the Claude native interface. To let users reach this capability directly through the OpenAI-compatible interface, the platform provides models with a -think suffix, which have thinking mode pre-enabled.

Supported Models

Notes

  1. The thinking capability is explicitly selected by the model name.
  2. The Claude thinking model uses the platform’s default context and token configurations.
    • Sonnet series default max_tokens = 32k
    • Opus series default max_tokens = 64k
  3. No additional parameters are required; invocation is consistent with regular models.
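Per the notes above, a thinking-model call is just a regular chat-completions call with a different model name. A minimal sketch of the request payload (the model name `claude-sonnet-think` is an illustrative placeholder; substitute the exact identifier from the supported-models table):

```python
import json

# Hypothetical model name with thinking pre-enabled; check the
# supported-models table for the exact identifier on your platform.
MODEL = "claude-sonnet-think"

def build_chat_request(prompt: str, model: str = MODEL) -> dict:
    """Build a standard OpenAI-compatible /chat/completions payload.

    No thinking-specific fields are needed: the -think model name alone
    enables thinking mode, and when max_tokens is omitted the platform
    default applies (32k for Sonnet, 64k for Opus).
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Prove that sqrt(2) is irrational.")
print(json.dumps(payload, indent=2))
```

The point of the sketch is what is absent: no thinking flag and no max_tokens override are required.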

GPT Thinking Model

The reasoning intensity of GPT-5.2 can only be configured through the /responses interface. For compatibility with the unified OpenAI /chat/completions interface, the platform offers preconfigured GPT-5.2-* models that fix a reasoning-intensity level at the model layer, so users can invoke them directly.

Supported Models

Notes

  1. The -low/-high suffix indicates the reasoning intensity level.
  2. The reasoning intensity is determined by the model name; no additional fields need to be passed.
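Selecting an intensity is therefore just a matter of picking the model name. A minimal sketch, with illustrative suffixed names (on the native /responses interface the same setting would instead be passed as a reasoning parameter):

```python
import json

# Hypothetical suffixed model names; the suffix fixes the reasoning
# intensity at the model layer, so the request body stays a plain
# chat-completions payload with no extra fields.
INTENSITY_MODELS = {
    "low": "gpt-5.2-low",
    "high": "gpt-5.2-high",
}

def build_request(prompt: str, intensity: str) -> dict:
    if intensity not in INTENSITY_MODELS:
        raise ValueError(f"unsupported intensity: {intensity!r}")
    return {
        "model": INTENSITY_MODELS[intensity],
        "messages": [{"role": "user", "content": prompt}],
        # Note: no reasoning/effort field -- the model name carries it.
    }

print(json.dumps(build_request("Summarize this RFC.", "high"), indent=2))
```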

Google Search Enhanced Models

The Gemini models do not enable Google Search by default; enabling it normally requires the Gemini native interface. To make this capability available directly through the OpenAI-compatible interface, some Gemini models integrate Google's official search. Selecting the corresponding model name enables search automatically during generation, with no additional parameters.

Supported Models

Notes

  1. Models with the -search suffix have integrated Google’s official search capabilities, suitable for scenarios requiring real-time information, external fact verification, and the latest data references.
  2. The search capability incurs additional costs, which will be accounted for in a separate log as part of the total fees.
    • The current version does not display detailed logs of search costs; this will be added in future updates.
  3. Only OpenAI-compatible format calls are supported.
    • The Gemini native SDK is not supported.
    • If you are using the official Gemini SDK, refer to the interface call examples for the corresponding version of the non-thinking models.
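The notes above reduce to a plain OpenAI-compatible call in which only the model name differs. A minimal sketch (both model names are illustrative placeholders; use the identifiers from the supported-models table):

```python
import json

# Hypothetical model names: the -search variant has Google's official
# search integrated; the base model does not.
BASE_MODEL = "gemini-2.5-pro"
SEARCH_MODEL = BASE_MODEL + "-search"

def build_search_request(prompt: str, use_search: bool) -> dict:
    """Search is toggled purely by the model name -- no tools or extra
    parameters are passed, per the OpenAI-compatible format."""
    return {
        "model": SEARCH_MODEL if use_search else BASE_MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }

print(json.dumps(
    build_search_request("What are today's top headlines?", True),
    indent=2,
))
```

Because the search fee is billed separately (note 2), keeping search-enabled and plain variants behind a single `use_search` flag, as above, makes it easy to reserve the -search models for prompts that actually need real-time information.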