POST /chat/completions
Python
from openai import OpenAI

client = OpenAI(
    base_url="https://aihubmix.com/v1",  # AIHubMix OpenAI-compatible endpoint
    api_key="AIHUBMIX_API_KEY"           # replace with your key; prefer an environment variable (see below)
)

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}]
)

# Print the assistant's reply message
print(completion.choices[0].message)
{
  "choices": [
    {
      "message": {
        "role": "<string>",
        "content": "<string>"
      },
      "finish_reason": "<string>"
    }
  ],
  "usage": {
    "prompt_tokens": 123,
    "completion_tokens": 123,
    "total_tokens": 123
  }
}
The API Playground provides a sandbox environment for real-time request testing and intuitive response data visualization.
For code security, we recommend:
  1. Managing sensitive information (such as API keys) through environment variables; in Python, read the key with os.getenv("AIHUBMIX_API_KEY"), as shown in the sketch after this list.
  2. Avoiding printing sensitive information in logs or other output.
  3. Adding .env to .gitignore so keys never land in the code repository.
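
A minimal sketch of recommendation 1, assuming the key has been exported as AIHUBMIX_API_KEY in your shell (for example from a .env file that .gitignore keeps out of the repository):

Python
import os

from openai import OpenAI

# Read the key from the environment instead of hard-coding it.
api_key = os.getenv("AIHUBMIX_API_KEY")
if not api_key:
    raise RuntimeError("AIHUBMIX_API_KEY is not set")

client = OpenAI(
    base_url="https://aihubmix.com/v1",
    api_key=api_key
)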

Authorizations

Authorization (string, header, required)
Bearer authentication. Add Authorization: Bearer AIHUBMIX_API_KEY to the request headers. Get your API key here.
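
For illustration, a sketch of the same header sent with the requests library instead of the SDK (the model name is only an example):

Python
import os
import requests

# Bearer authentication: the API key goes in the Authorization header.
response = requests.post(
    "https://aihubmix.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.getenv('AIHUBMIX_API_KEY')}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello"}]
    }
)
print(response.json()["choices"][0]["message"]["content"])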

Body (application/json)

model (string, required)
Model ID to use. Check it in the Model Hub.

messages (object[], required)
Conversation messages, each with a role and content.

temperature (number, default: 0.8)
Sampling temperature (0-2). Higher values yield more random output.

max_tokens (integer, default: 1024)
Maximum number of tokens to generate (model-dependent).

top_p (number, default: 1)
Nucleus (top-p) sampling parameter controlling diversity.

frequency_penalty (number, default: 0)
Frequency penalty to reduce repetition.

presence_penalty (number, default: 0)
Presence penalty to encourage new topics.

stream (boolean, default: false)
Enable streaming responses for real-time output; see the streaming sketch after this parameter list.

web_search_options (object)
Web search options (supported only by specific search models).
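
A minimal streaming sketch, reusing the client from the example at the top of the page; chunks follow the standard chat-completions streaming format, where each delta carries a fragment of the reply:

Python
# Print tokens as they arrive instead of waiting for the full reply.
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()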

Response

Successful response

choices (object[])
Generated completion choices, each with a message (role, content) and a finish_reason.

usage (object)
Token usage for the request: prompt_tokens, completion_tokens, and total_tokens.
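
For reference, a short sketch of reading these fields from the SDK response object, reusing completion from the example at the top of the page:

Python
# The assistant's reply and the token accounting for the request.
print(completion.choices[0].message.content)
print(
    "prompt:", completion.usage.prompt_tokens,
    "completion:", completion.usage.completion_tokens,
    "total:", completion.usage.total_tokens
)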