Use the OpenPipe SDK as a drop-in replacement for the generic OpenAI package. Calls sent through the OpenPipe SDK will be recorded by default for later training. You’ll use this same SDK to call your own fine-tuned models once they’re deployed.
OpenPipe follows OpenAI’s concept of metadata tagging for requests. You can use metadata tags in the Request Logs view to narrow down the data your model will train on.
We recommend assigning a unique metadata tag to each of your prompts.
These tags will help you find all the input/output pairs associated with a certain prompt and fine-tune a model to replace it.
```python
from openpipe import OpenAI

client = OpenAI(
    # defaults to os.environ.get("OPENAI_API_KEY")
    api_key="My API Key",
    openpipe={
        # defaults to os.environ.get("OPENPIPE_API_KEY")
        "api_key": "My OpenPipe API Key",
        # optional, defaults to os.environ.get("OPENPIPE_BASE_URL")
        # or https://api.openpipe.ai/api/v1 if not set
        "base_url": "My URL",
    },
)

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "system", "content": "count to 10"}],
    metadata={"prompt_id": "counting", "any_key": "any_value"},
)
```
```typescript
import OpenAI from "openpipe/openai";

// Fully compatible with original OpenAI initialization
const openai = new OpenAI({
  apiKey: "my api key", // defaults to process.env["OPENAI_API_KEY"]
  // openpipe key is optional
  openpipe: {
    apiKey: "my api key", // defaults to process.env["OPENPIPE_API_KEY"]
    baseUrl: "my url", // defaults to process.env["OPENPIPE_BASE_URL"] or https://api.openpipe.ai/api/v1 if not set
  },
});

const completion = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Count to 10" }],
  model: "gpt-4o",
  // optional
  metadata: {
    prompt_id: "counting",
    any_key: "any_value",
  },
  store: true, // Enable/disable data collection. Defaults to true.
});
```
We recommend keeping request logging turned on from the beginning. If you change your prompt, simply set a new prompt_id metadata tag so you can select only the latest version when you’re ready to create a dataset.
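One lightweight way to follow this advice is to embed a version suffix in the prompt_id tag. This is a convention sketch, not an official OpenPipe feature: the `versioned_metadata` helper and the `-v2` naming scheme below are assumptions for illustration.

```python
# Hypothetical helper: embed a version suffix in the prompt_id tag so each
# prompt revision can be filtered separately in the Request Logs view.
def versioned_metadata(prompt_id: str, version: int, **extra: str) -> dict:
    """Build a metadata dict with a versioned prompt_id tag."""
    return {"prompt_id": f"{prompt_id}-v{version}", **extra}

# After editing the prompt, bump the version; older logs keep the old tag,
# so a dataset can be built from only the latest revision.
metadata = versioned_metadata("counting", 2, experiment="baseline")
# metadata == {"prompt_id": "counting-v2", "experiment": "baseline"}
```

The dict this returns can be passed directly as the `metadata` argument in the calls shown above.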