What We Provide

Here are a few of the features we offer:

  • Unified SDK: Collect and use interaction data to fine-tune a custom model, then keep improving its performance over time. Switching requests from your previous LLM provider to your new model is as simple as changing the model name. All our models implement the OpenAI inference format, so you won’t have to change how you parse responses.

  • Data Capture: OpenPipe captures every request and response and stores it for your future use.

    • Request Logs: We automatically log your requests and let you tag them for easy filtering.
    • Import Data: OpenPipe also allows you to import data for fine-tuning from OpenAI-compatible JSONL files.
    • Export Data: Once your request logs are recorded, you can export them at any time.
  • Fine-Tuning: With all your LLM requests and responses in one place, it’s easy to select the data you want to fine-tune on and kick off a job.

    • Pruning Rules: By removing large chunks of unchanging text and fine-tuning a model on the compacted data, we can reduce the size of incoming requests and save you money on inference.
  • Model Hosting: After we’ve trained your model, OpenPipe will automatically begin hosting it.

    • Caching: Improve performance and reduce costs by caching previously generated responses.
  • Evaluations: Compare your models against one another and OpenAI base models. Set up custom instructions and get quick insights into your models’ performance.
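Because every model speaks the OpenAI inference format, switching providers touches only the model name. Here is a minimal sketch of that idea using a plain request body; the model names (and the assumption that nothing else changes) are illustrative:

```python
import json

# Build an OpenAI-format chat completion request body.
# The model names below are hypothetical examples.
def build_request(model: str) -> dict:
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize our refund policy."},
        ],
        "temperature": 0.0,
    }

before = build_request("gpt-4o")                # previous provider's model
after = build_request("openpipe:my-fine-tune")  # hypothetical fine-tuned model

# Only the "model" field differs, so your request-building and
# response-parsing code can stay exactly as it is.
changed = {k for k in before if before[k] != after[k]}
print(changed)  # → {'model'}
```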
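For data import, an OpenAI-compatible JSONL file is one JSON object per line, each holding a `messages` array. A quick sketch of writing and reading that shape with the standard library (the example conversations are made up):

```python
import json

# Two training examples in the OpenAI-compatible chat fine-tuning shape.
examples = [
    {"messages": [
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "4"},
    ]},
    {"messages": [
        {"role": "user", "content": "Capital of France?"},
        {"role": "assistant", "content": "Paris"},
    ]},
]

# Serialize: one compact JSON object per line.
jsonl = "\n".join(json.dumps(ex) for ex in examples)

# Reading it back is one json.loads call per line.
parsed = [json.loads(line) for line in jsonl.splitlines()]
assert parsed == examples
```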

Welcome to the OpenPipe community!