Who We Are

We’re a team of full-stack engineers and machine learning researchers working to streamline the process of integrating fine-tuned models into any application. Our goal is to make the fine-tuning process accessible to everyone.

What We Provide

Here are a few of the features we offer:

  • Data Capture: OpenPipe automatically captures every request and response sent through our drop-in replacement SDK and stores it for your future use.

  • Monitoring: OpenPipe offers intuitive tools to view the frequency and cost of your LLM requests, along with a dedicated view for requests that returned error status codes.

  • Searchable Logs: We enable you to search your past requests, and provide a simple protocol for tagging them by prompt ID for easy filtering.

  • Fine-Tuning: With all your LLM requests and responses in one place, it’s easy to select the data you want to fine-tune on and kick off a job.

  • Model Hosting: After we’ve trained your model, OpenPipe will automatically begin hosting it. Accessing your model requires an API key from your project.

  • Unified SDK: Switching requests from your previous LLM provider to your new model is as simple as changing the model name. All our models implement the OpenAI inference format, so you won’t have to change how you parse their responses.

  • Data Export: OpenPipe allows you to download your request logs or the fine-tuned models you’ve trained at any time for easy self-hosting.

  • Experimentation: The fine-tunes you’ve created on OpenPipe are immediately available for inference in our experimentation playground.
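Because OpenPipe models implement the OpenAI inference format, migrating a request is a one-line change. Here’s a minimal sketch of what that looks like; the model names below are hypothetical placeholders, and the helper function is just for illustration:

```python
# Sketch: migrating to an OpenPipe-hosted model only changes the model name.
# The request payload keeps the OpenAI chat-completions shape, so response
# parsing stays the same. Model names here are hypothetical placeholders.

def build_chat_request(model: str, user_message: str) -> dict:
    """Build a chat-completions payload in the OpenAI inference format."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

# Before: request targeting your previous provider's model
before = build_chat_request("gpt-4o", "Summarize this support ticket.")

# After: identical request, with only the model name swapped out
after = build_chat_request("my-fine-tuned-model", "Summarize this support ticket.")

# Everything except the model identifier is unchanged
assert before["messages"] == after["messages"]
```

The payload shape is identical in both cases, which is why no changes are needed to the code that parses the response.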

Welcome to the OpenPipe community!