SDK
The simplest way to start ingesting request logs into OpenPipe is by installing our Python or TypeScript SDK. Requests to both OpenAI and OpenPipe models are automatically recorded. Logging doesn’t add any latency to your requests: our SDK calls the OpenAI server directly and returns your completion before kicking off the request to record it in your project. We provide a drop-in replacement for the OpenAI SDK, so the only code you need to update is your import statement.
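For Python, the import swap looks roughly like this. This is a sketch, not a verbatim quickstart: it assumes the `openpipe` package is installed (e.g. `pip install openpipe`) and that your OpenAI and OpenPipe API keys are available in the environment; check the SDK reference for the exact import path and configuration options.

```python
# Before: the stock OpenAI client.
# from openai import OpenAI

# After: OpenPipe's drop-in replacement (assumed import path).
from openpipe import OpenAI

# Reads OPENAI_API_KEY and OPENPIPE_API_KEY from the environment (assumption).
client = OpenAI()

# Everything below is unchanged OpenAI SDK usage; the completion is
# returned first, then the request is logged to your OpenPipe project.
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Count to three."}],
)
```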
Proxy
If you’re developing in a language other than Python or TypeScript, the best way to ingest data into OpenPipe is through our proxy. We provide a /chat/completions endpoint that is fully compatible with OpenAI, so you can continue using the latest features, like tool calls and streaming, without a hitch.
Integrating the proxy and logging requests takes a few steps:
- Add an OpenAI key to your project on the project settings page.
- Set the authorization token of your request to your OpenPipe API key.
- Set the destination URL of your request to https://api.openpipe.ai/api/v1/chat/completions.
- For any request you’d like to record, include the "store": true parameter in the request body.

We also recommend adding custom metadata tags to your requests to distinguish data collected from different prompts.
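The steps above can be sketched in any language with an HTTP client; here is a minimal Python version using only the standard library. The `"metadata"` field name for custom tags is an assumption for illustration; check the API reference for the exact key.

```python
import json
import os
import urllib.request

OPENPIPE_BASE = "https://api.openpipe.ai/api/v1"


def build_chat_request(messages, model="gpt-4o", tags=None):
    """Build an OpenAI-compatible chat request routed through the OpenPipe proxy.

    `build_chat_request` is a hypothetical helper name; the "metadata" key
    for tags is an assumption -- consult the docs for the exact field.
    """
    body = {
        "model": model,
        "messages": messages,
        "store": True,  # ask OpenPipe to record this request
    }
    if tags:
        body["metadata"] = tags  # assumed field name for custom tags
    return urllib.request.Request(
        f"{OPENPIPE_BASE}/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            # Authenticate with your OpenPipe API key, not your OpenAI key.
            "Authorization": f"Bearer {os.environ.get('OPENPIPE_API_KEY', '')}",
        },
        method="POST",
    )


# Sending it is an ordinary HTTP call:
# with urllib.request.urlopen(build_chat_request([...])) as resp:
#     completion = json.load(resp)
```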
Reporting
If you need more flexibility in how you log requests, you can use the report endpoint. This gives you full control over when and how to create request logs.
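A report call might look like the following sketch. The endpoint URL and the field names (`requestedAt`, `receivedAt`, `reqPayload`, `respPayload`, `tags`) are assumptions for illustration, not a confirmed schema; consult the API reference before relying on them.

```python
import json
import os
import time
import urllib.request


def build_report(req_payload, resp_payload, tags=None):
    """Build a request to OpenPipe's report endpoint.

    The URL and every field name here are assumed for illustration;
    check the API reference for the exact schema.
    """
    now_ms = int(time.time() * 1000)  # epoch milliseconds (assumed format)
    body = {
        "requestedAt": now_ms,
        "receivedAt": now_ms,
        "reqPayload": req_payload,    # the request you sent to the model
        "respPayload": resp_payload,  # the completion you received
        "tags": tags or {},           # your custom metadata tags
    }
    return urllib.request.Request(
        "https://api.openpipe.ai/api/v1/report",  # assumed endpoint URL
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('OPENPIPE_API_KEY', '')}",
        },
        method="POST",
    )
```

Because you construct the log yourself, you decide exactly which requests get recorded and when, e.g. only after a response passes validation.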