Supported Models
- claude-sonnet-4-5-think
- claude-opus-4-5-think
- gpt-5.2-high
- gpt-5.2-low
- gemini-3-pro-preview-search
- gemini-3-flash-preview-search
Claude Thinking Model
Claude models do not enable thinking mode by default; using their deep reasoning capabilities normally requires invocation through the Claude native interface. To let users access this capability directly via the OpenAI-compatible interface, the platform provides *-think models with thinking mode pre-enabled.
Supported Models
- claude-sonnet-4-5-think
- claude-opus-4-5-think
Notes
- The thinking capability is selected explicitly by the model name.
- The Claude thinking models use the platform's default context and token configurations:
  - Sonnet series default: max_tokens = 32k
  - Opus series default: max_tokens = 64k
- No additional parameters are required; invocation is the same as for regular models.
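Since the thinking capability is carried entirely by the model name, a request looks exactly like a regular chat call. The sketch below builds such a payload; the helper function and any endpoint details are illustrative assumptions, not part of the platform's documented API.

```python
import json

# Minimal sketch: an OpenAI-compatible chat/completions payload where
# thinking mode is selected by the model name alone. build_chat_request
# is a hypothetical helper for illustration.
def build_chat_request(model: str, prompt: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # max_tokens may be omitted: the platform then applies its
        # defaults (32k for Sonnet series, 64k for Opus series).
    }

payload = build_chat_request(
    "claude-sonnet-4-5-think",
    "Prove that the square root of 2 is irrational.",
)
print(json.dumps(payload, indent=2))
```

Note that the payload carries no thinking-specific field at all; swapping the model name back to a non-think variant is the only change needed to disable the capability.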
GPT Thinking Model
The reasoning intensity of GPT-5.2 can only be configured through the /responses interface. To ensure compatibility with the unified OpenAI /chat/completions interface, the platform offers preconfigured gpt-5.2-* models that fix different reasoning intensities at the model layer, allowing users to invoke them directly.
Supported Models
- gpt-5.2-high
- gpt-5.2-low
Notes
- The -low/-high suffix indicates the reasoning intensity.
- The reasoning intensity is determined by the model name; no additional fields need to be passed.
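Because the intensity is baked into the model name, the same payload shape works for both variants and no reasoning field is passed. A minimal sketch, assuming a hypothetical helper:

```python
# Sketch: reasoning intensity is fixed at the model layer, so selecting
# it means selecting a model name. build_request is illustrative only.
def build_request(intensity: str, prompt: str) -> dict:
    model = f"gpt-5.2-{intensity}"  # "high" or "low"
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # No reasoning/effort field is included: the /responses-only
        # configuration is preset by the platform for these models.
    }

high = build_request("high", "Plan a multi-step database migration.")
low = build_request("low", "Summarize this paragraph in one sentence.")
print(high["model"], low["model"])
```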
Google Search Enhanced Models
Gemini models do not enable Google Search by default; enabling it normally requires the Gemini native interface. To let users access this capability directly via the OpenAI-compatible interface, some Gemini models have Google's official search capabilities built in. By selecting the corresponding model name, search is enabled automatically during generation, with no additional parameters.
Supported Models
- gemini-3-pro-preview-search
- gemini-3-flash-preview-search
Notes
- Models with the -search suffix integrate Google's official search capabilities and suit scenarios requiring real-time information, external fact verification, or up-to-date data references.
- The search capability incurs additional costs, which are accounted for in a separate log as part of the total fees.
- The current version does not display detailed logs of search costs; this will be added in future updates.
- Only OpenAI-compatible format calls are supported.
- Gemini native SDK is not supported.
- If using the Gemini official SDK, please refer to the interface call examples for the corresponding version of non-thinking models.
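As with the other model families above, the search-enabled variants are selected purely by model name over the OpenAI-compatible format. A minimal sketch of such a request payload, with the helper function as an illustrative assumption:

```python
import json

# Sketch: an OpenAI-compatible payload targeting a search-enabled
# Gemini model. No tool or search parameter is added -- the -search
# suffix in the model name activates Google Search automatically.
def build_search_request(prompt: str) -> dict:
    return {
        "model": "gemini-3-pro-preview-search",
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_search_request("What are today's top technology headlines?")
# Dropping the -search suffix from the model name would yield the same
# request without search enabled.
print(json.dumps(payload))
```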