OpenAI API

Overview

Deeptrin provides compatibility with the OpenAI API standard, allowing for easier integration into existing applications. This API supports Chat Completion and Completion endpoints, both in streaming and regular modes.

Base URL

https://api.deeptrin.com/inference-api/openai/v1

API Key

To use the API, you need to obtain a Deeptrin AI API Key. For detailed instructions, please refer to the authentication documentation.
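Rather than hard-coding the key in source files, a common pattern is to read it from an environment variable. A minimal sketch (the variable name DEEPTRIN_API_KEY is an assumed convention, not mandated by Deeptrin):

```python
import os

# Read the key from an environment variable (assumed name) instead of
# embedding it in source code; fall back to a placeholder if unset.
api_key = os.environ.get("DEEPTRIN_API_KEY", "<YOUR Deeptrin API Key>")
```

Set the variable in your shell (for example, `export DEEPTRIN_API_KEY=...`) before running your application.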

Supported Models

  • meta-llama/Meta-Llama-3.1-8B-Instruct

  • meta-llama/Meta-Llama-3.1-70B-Instruct

  • ...

As AI models continue to evolve, we will update the list of supported models regularly. While some models may be removed over time, we will strive to handle each transition in a way that preserves compatibility for users already integrated with those model APIs. For detailed information on the transition process, please refer to this.

Supported APIs

  1. Chat Completion (streaming and regular)

  2. Completion (streaming and regular)

Usage Examples

Python Client

First, install the OpenAI Python client:

pip install 'openai>=1.0.0'

Chat Completions API

from openai import OpenAI

client = OpenAI(
    base_url="https://api.deeptrin.com/inference-api/openai/v1",
    api_key="<YOUR Deeptrin API Key>",
)

model = "meta-llama/Meta-Llama-3.1-8B-Instruct"
stream = True  # or False
max_tokens = 512

chat_completion_res = client.chat.completions.create(
    model=model,
    messages=[
        {
            "role": "system",
            "content": "Act like you are a helpful assistant.",
        },
        {
            "role": "user",
            "content": "Hi there!",
        }
    ],
    stream=stream,
    max_tokens=max_tokens,
)

if stream:
    for chunk in chat_completion_res:
        print(chunk.choices[0].delta.content or "", end="")
else:
    print(chat_completion_res.choices[0].message.content)

Completions API

from openai import OpenAI

client = OpenAI(
    base_url="https://api.deeptrin.com/inference-api/openai/v1",
    api_key="<YOUR Deeptrin API Key>",
)

model = "meta-llama/Meta-Llama-3.1-8B-Instruct"
stream = True  # or False
max_tokens = 512

completion_res = client.completions.create(
    model=model,
    prompt="A chat between a curious user and an artificial intelligence assistant.\nYou are a cooking assistant.\nBe edgy in your cooking ideas.\nUSER: How do I make pasta?\nASSISTANT: First, boil water. Then, add pasta to the boiling water. Cook for 8-10 minutes or until al dente. Drain and serve!\nUSER: How do I make it better?\nASSISTANT:",
    stream=stream,
    max_tokens=max_tokens,
)

if stream:
    for chunk in completion_res:
        print(chunk.choices[0].text or "", end="")
else:
    print(completion_res.choices[0].text)

cURL Client

Chat Completions API

# Set your API key
export API_KEY="<YOUR Deeptrin API Key>"

curl "https://api.deeptrin.com/inference-api/openai/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${API_KEY}" \
  -d $'{
    "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
    "messages": [
        {
            "role": "system",
            "content": "Act like you are a helpful assistant."
        },
        {
            "role": "user",
            "content": "Hi there!"
        }
    ],
    "max_tokens": 512
}'

Completions API

# Set your API key
export API_KEY="<YOUR Deeptrin API Key>"

curl "https://api.deeptrin.com/inference-api/openai/v1/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${API_KEY}" \
  -d $'{
    "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
    "prompt": "A chat between a curious user and an artificial intelligence assistant.\\n You are a cooking assistant.\\n Be edgy in your cooking ideas.\\n USER: How do I make pasta?\\n ASSISTANT: First, boil water. Then, add pasta to the boiling water. Cook for 8-10 minutes or until al dente. Drain and serve!\\n USER: How do I make it better?\\n ASSISTANT:",
    "max_tokens": 512
}'

Model Parameters

Please note that we are not yet fully compatible with every OpenAI parameter. If you encounter any issues, you can start a discussion in our Discord server channel #issues.

Supported Parameters

  • model: Specify the model to use. Find all supported models here.

  • messages: (ChatCompletion only) An array of message objects with roles (system, user, assistant) and content.

  • prompt: (Completion only) The prompt to generate completions for.

  • max_tokens: The maximum number of tokens to generate.

  • stream: If set to true, partial message deltas will be sent as they become available.

  • temperature: Controls randomness in output generation (0-2).

  • top_p: Alternative to temperature, controls diversity via nucleus sampling.

  • stop: Up to 4 sequences where the API will stop generating further tokens.

  • n: Number of chat completion choices to generate for each input message.

  • presence_penalty: Penalizes new tokens based on their presence in the generated text so far.

  • frequency_penalty: Penalizes new tokens based on their frequency in the generated text so far.

  • repetition_penalty: Penalizes new tokens based on their appearance in the prompt and generated text.

  • logit_bias: Modifies the likelihood of specified tokens appearing in the output.
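As a sketch of how these parameters fit together, the request body below combines the sampling knobs with the required fields. It is a hypothetical payload built locally and never sent; every parameter value is illustrative, not a recommended default.

```python
import json

# Illustrative Chat Completion request body; values are examples only.
request_body = {
    "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
    "messages": [
        {"role": "system", "content": "Act like you are a helpful assistant."},
        {"role": "user", "content": "Hi there!"},
    ],
    "max_tokens": 256,
    "stream": False,
    "temperature": 0.7,       # 0-2; higher means more random output
    "top_p": 0.9,             # nucleus sampling, an alternative to temperature
    "stop": ["USER:"],        # up to 4 stop sequences
    "n": 1,                   # number of completion choices
    "presence_penalty": 0.0,
    "frequency_penalty": 0.5,
}

payload = json.dumps(request_body)
```

In practice you would pass the same fields as keyword arguments to `client.chat.completions.create(...)`; in general, use either `temperature` or `top_p`, not both.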

Migrating from OpenAI

If you're already using OpenAI's chat completion endpoint, you can easily switch to Deeptrin by:

  1. Setting the base URL to https://api.deeptrin.com/inference-api/openai/v1

  2. Obtaining and setting your Deeptrin API Key

  3. Updating the model name according to your needs
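A minimal sketch of the change: only the client construction differs, and the rest of your calling code stays the same. The helper function name here is hypothetical, used only to highlight the two values that change.

```python
# Hypothetical helper: the only client settings that change when
# migrating from OpenAI to Deeptrin are the base URL and the API key.
def deeptrin_client_kwargs(api_key: str) -> dict:
    """Return the kwargs to pass to OpenAI(...) for Deeptrin."""
    return {
        "base_url": "https://api.deeptrin.com/inference-api/openai/v1",
        "api_key": api_key,
    }

# Usage with the OpenAI Python client:
# client = OpenAI(**deeptrin_client_kwargs("<YOUR Deeptrin API Key>"))
```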

For more information or support, please visit our website or join our Discord server.
