Running Inference
Once your fine-tuned model is deployed, you’re ready to start running inference.
First, make sure you’ve set up the SDK properly; see the OpenPipe SDK section for details. Once the SDK is installed and you’ve added your OPENPIPE_API_KEY to your environment variables, you’re almost done.
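If you’d rather set the key from code (for example, in a notebook), you can export it before constructing the client. A minimal sketch, assuming the SDK reads OPENPIPE_API_KEY from the environment as described above; the key value shown is a placeholder:

import os

# Placeholder value; use the API key from your OpenPipe project settings.
# Assumes the SDK reads OPENPIPE_API_KEY from the environment.
os.environ["OPENPIPE_API_KEY"] = "your-openpipe-api-key"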
The last step is to update the model parameter in your request to the ID of your new fine-tuned model.
Python
from openpipe import OpenAI

# Find the config values in "Installing the SDK"
client = OpenAI()

completion = client.chat.completions.create(
    # model="gpt-3.5-turbo",  # original base model
    model="openpipe:your-fine-tuned-model-id",
    messages=[{"role": "system", "content": "count to 10"}],
    # Optional tags are recorded with the request and can be used for filtering
    openpipe={"tags": {"prompt_id": "counting", "any_key": "any_value"}},
)
Queries to your fine-tuned models will now be shown in the Request Logs panel.
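Since the OpenPipe client mirrors the OpenAI Python client, you can read the generated text from the response in the usual way. A minimal sketch, assuming the completion object from the example above:

# The response follows the standard OpenAI chat completion shape.
print(completion.choices[0].message.content)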
