Use the OpenPipe SDK as a drop-in replacement for the official OpenAI Python package. Calls made through the OpenPipe SDK are recorded by default so you can use them for training later. You’ll use the same SDK to call your own fine-tuned models once they’re deployed.
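
For example, once a model is fine-tuned and deployed, you call it through the same client simply by changing the model parameter. The model ID below is a placeholder; substitute the ID shown for your model in the OpenPipe dashboard.

from openpipe import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    # Placeholder model ID -- replace with your deployed model's ID
    # from the OpenPipe dashboard.
    model="openpipe:your-fine-tuned-model",
    messages=[{"role": "user", "content": "Hello!"}],
)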

Find the SDK at https://pypi.org/project/openpipe/
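
Install it with pip:

pip install openpipe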

Simple Integration

Add OPENPIPE_API_KEY to your environment variables.

export OPENPIPE_API_KEY=opk-<your-api-key>
# Or you can set it in your code, see "Complete Example" below

Replace this line

from openai import OpenAI

with this one

from openpipe import OpenAI
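
That’s the only code change required. With OPENAI_API_KEY and OPENPIPE_API_KEY set in your environment, your existing OpenAI calls run unchanged and are logged automatically. A minimal sketch:

from openpipe import OpenAI

# Picks up OPENAI_API_KEY and OPENPIPE_API_KEY from the environment.
client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)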

Adding Searchable Tags

OpenPipe has a concept of “tagging.” You can use tags in the Request Logs view to narrow down the data your model will train on. We recommend assigning a unique tag to each of your prompts. These tags will help you find all the input/output pairs associated with a certain prompt and fine-tune a model to replace it.

Here’s how you can use the tagging feature:

Complete Example

from openpipe import OpenAI
import os

client = OpenAI(
    # defaults to os.environ.get("OPENAI_API_KEY")
    api_key="My API Key",
    openpipe={
        # defaults to os.environ.get("OPENPIPE_API_KEY")
        "api_key": "My OpenPipe API Key",
        # optional, defaults to os.environ.get("OPENPIPE_BASE_URL") or https://app.openpipe.ai/api/v1 if not set
        "base_url": "My URL",
    }
)

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "system", "content": "count to 10"}],
    openpipe={
        "tags": {"prompt_id": "counting", "any_key": "any_value"},
        "log_request": True,  # Enable/disable data collection. Defaults to True.
    },
)
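
You can also use the same openpipe argument to skip logging for an individual request that you don’t want in your training data, for example:

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
    openpipe={
        # Don't record this request in OpenPipe's request logs.
        "log_request": False,
    },
)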

Should I Wait to Enable Logging?

We recommend keeping request logging turned on from the beginning. If you change your prompt, just assign a new prompt_id tag so you can select only the latest version when you’re ready to create a dataset.
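
For example, after revising the counting prompt from the example above, you might tag requests made with the new version separately (the tag values here are illustrative):

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "system", "content": "count to 10 in French"}],
    openpipe={
        # A new prompt_id value keeps logs from the revised prompt separate.
        "tags": {"prompt_id": "counting-v2"},
    },
)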