# OpenPipe

## Docs

- [Get Model](https://docs.openpipe.ai/api-reference/get-getModel): Get a model by ID. Consult the OpenPipe team before using.
- [List Models](https://docs.openpipe.ai/api-reference/get-listModels): List all models for a project. Consult the OpenPipe team before using.
- [Chat Completions](https://docs.openpipe.ai/api-reference/post-chatcompletions): OpenAI-compatible route for generating inference and optionally logging the request (see the example sketch after this list).
- [Judge Criteria](https://docs.openpipe.ai/api-reference/post-criteriajudge): Get a judgement of a completion against the specified criterion.
- [Report](https://docs.openpipe.ai/api-reference/post-report): Record request logs from OpenAI models.
- [Report Anthropic](https://docs.openpipe.ai/api-reference/post-report-anthropic): Record request logs from Anthropic models.
- [Update Log Metadata](https://docs.openpipe.ai/api-reference/post-updatemetadata): Update tags metadata for logged calls matching the provided filters.
- [Base Models](https://docs.openpipe.ai/base-models): Train and compare across a range of the most powerful base models.
- [Caching](https://docs.openpipe.ai/features/caching): Improve performance and reduce costs by caching previously generated responses.
- [Anthropic Proxy](https://docs.openpipe.ai/features/chat-completions/anthropic)
- [Custom External Models](https://docs.openpipe.ai/features/chat-completions/external-models)
- [Mixture of Agents Chat Completions](https://docs.openpipe.ai/features/chat-completions/moa)
- [Chat Completions](https://docs.openpipe.ai/features/chat-completions/overview)
- [Prompt Prefilling](https://docs.openpipe.ai/features/chat-completions/prompt-prefilling): Use Prompt Prefilling to control the initial output of the completion.
- [Criterion Alignment Sets](https://docs.openpipe.ai/features/criteria/alignment-set): Use alignment sets to test and improve your criteria.
- [API Endpoints](https://docs.openpipe.ai/features/criteria/api): Use the Criteria API for runtime evaluation and offline testing.
- [Criteria](https://docs.openpipe.ai/features/criteria/overview): Align LLM judgements with human ratings to evaluate and improve your models.
- [Criteria Quick Start](https://docs.openpipe.ai/features/criteria/quick-start): Create and align your first criterion.
- [Exporting Data](https://docs.openpipe.ai/features/datasets/exporting-data): Export your past requests as a JSONL file in their raw form.
- [Importing Request Logs](https://docs.openpipe.ai/features/datasets/importing-logs): Search and filter your past LLM requests to inspect your responses and build a training dataset.
- [Datasets](https://docs.openpipe.ai/features/datasets/overview): Collect, evaluate, and refine your training data.
- [Datasets Quick Start](https://docs.openpipe.ai/features/datasets/quick-start): Create your first dataset and import training data.
- [Relabeling Data](https://docs.openpipe.ai/features/datasets/relabeling-data): Use powerful models to generate new outputs for your data before training.
- [Uploading Data](https://docs.openpipe.ai/features/datasets/uploading-data): Upload external data to kickstart your fine-tuning process. Use the OpenAI chat fine-tuning format.
- [Direct Preference Optimization (DPO)](https://docs.openpipe.ai/features/dpo/overview)
- [DPO Quick Start](https://docs.openpipe.ai/features/dpo/quick-start): Train your first DPO fine-tuned model with OpenPipe.
- [Evaluations](https://docs.openpipe.ai/features/evaluations/overview): Evaluate your fine-tuned models against comparison LLMs like GPT-4 and GPT-4-Turbo. Add and remove models from the evaluation, and customize the evaluation criteria.
- [Evaluations Quick Start](https://docs.openpipe.ai/features/evaluations/quick-start): Create your first head-to-head evaluation.
- [Fallback options](https://docs.openpipe.ai/features/fallback): Safeguard your application against potential failures, timeouts, or instabilities that may occur when using experimental or newly released models.
- [Fine Tuning via API (Beta)](https://docs.openpipe.ai/features/fine-tuning/api): Fine-tune your models programmatically through our API.
- [Fine-Tuning Quick Start](https://docs.openpipe.ai/features/fine-tuning/quick-start): Train your first fine-tuned model with OpenPipe.
- [Fine Tuning via Webapp](https://docs.openpipe.ai/features/fine-tuning/webapp): Fine-tune your models on filtered logs or uploaded datasets. Filter by prompt ID and exclude requests with an undesirable output.
- [Mixture of Agents](https://docs.openpipe.ai/features/mixture-of-agents): Use Mixture of Agents to increase quality beyond SOTA models.
- [Pruning Rules](https://docs.openpipe.ai/features/pruning-rules): Decrease input token counts by pruning out chunks of static text.
- [Exporting Logs](https://docs.openpipe.ai/features/request-logs/exporting-logs): Export your past requests as a JSONL file in their raw form.
- [Logging Requests](https://docs.openpipe.ai/features/request-logs/logging-requests): Record production data to train and improve your models' performance.
- [Logging Anthropic Requests](https://docs.openpipe.ai/features/request-logs/reporting-anthropic)
- [Updating Metadata Tags](https://docs.openpipe.ai/features/updating-metadata)
- [Installing the SDK](https://docs.openpipe.ai/getting-started/openpipe-sdk)
- [Quick Start](https://docs.openpipe.ai/getting-started/quick-start): Get started with OpenPipe in a few quick steps.
- [OpenPipe Documentation](https://docs.openpipe.ai/introduction): Software engineers and data scientists use OpenPipe's intuitive fine-tuning and monitoring services to decrease the cost and latency of their LLM operations. You can use OpenPipe to collect and analyze LLM logs, create fine-tuned models, and compare output from multiple models given the same input.
- [Overview](https://docs.openpipe.ai/overview): OpenPipe is a streamlined platform designed to help product-focused teams train specialized LLM models as replacements for slow and expensive prompts.
- [Pricing Overview](https://docs.openpipe.ai/pricing/pricing)
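
Because the Chat Completions route is OpenAI-compatible, existing OpenAI client code can usually be pointed at OpenPipe by swapping the base URL and API key. The snippet below is a minimal sketch, not a canonical example from these docs: the base URL and the `openpipe:...` model slug are assumptions, so confirm both against the Chat Completions and Quick Start pages linked above.

```python
# Minimal sketch: calling OpenPipe's OpenAI-compatible Chat Completions route
# with the standard openai client. The base URL and model slug below are
# assumptions -- check them against the linked Chat Completions docs.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.openpipe.ai/api/v1",  # assumed OpenPipe API base URL
    api_key=os.environ["OPENPIPE_API_KEY"],     # your OpenPipe project API key
)

completion = client.chat.completions.create(
    model="openpipe:your-fine-tuned-model",  # hypothetical fine-tuned model slug
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this ticket in one sentence."},
    ],
)

print(completion.choices[0].message.content)
```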