We’re currently beta-testing a novel completion-generation technique we’re calling “Mixture of Agents,” which we’ll document more formally soon.

The basic idea is that instead of simply asking GPT-4 to generate a completion for your prompt directly, we use a series of GPT-4 prompts to iteratively improve the completion. The steps our “mixture of agents” model takes are as follows:

  • Prompt 1 generates 3 candidate completions in parallel by calling the chosen base model with n=3 and a high temperature to promote output diversity.
  • Prompt 2 calls the base model again, passing in the original input along with the 3 candidate completions generated by prompt 1, and asks the LLM to review and critique each candidate.
  • Prompt 3 passes the original input, the 3 candidate completions, and their critiques back to the base model, which uses this information to generate a final completion that incorporates the best of all 3 candidates.
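
The steps above can be sketched as follows. This is a minimal illustration, assuming a generic `complete(messages, n, temperature)` helper that wraps the chosen base model’s chat API and returns a list of completion strings; the prompt wording is our own invention, not the exact prompts used in production.

```python
# Minimal sketch of the three-prompt mixture-of-agents flow.
# `complete(messages, n, temperature)` is an assumed helper that calls
# the chosen base model and returns a list of n completion strings.

def moa_completion(messages, complete, temperature=1.0):
    """Run the mixture-of-agents flow over a chat `messages` list."""
    # Step 1: generate 3 diverse candidates (n=3, high temperature).
    candidates = complete(messages, n=3, temperature=temperature)

    numbered = "\n\n".join(
        f"Candidate {i + 1}:\n{c}" for i, c in enumerate(candidates)
    )

    # Step 2: ask the base model to review and critique the candidates,
    # with the original input still in context.
    critique = complete(
        messages + [{
            "role": "user",
            "content": "Review and critique each candidate response:\n\n" + numbered,
        }],
        n=1,
        temperature=0.0,
    )[0]

    # Step 3: synthesize a final completion from candidates + critiques.
    final = complete(
        messages + [{
            "role": "user",
            "content": (
                numbered + "\n\nCritiques:\n" + critique
                + "\n\nWrite one final response incorporating the best of all candidates."
            ),
        }],
        n=1,
        temperature=0.0,
    )[0]
    return final
```

Passing the model call in as a function keeps the sketch independent of any particular client library; in practice `complete` would wrap a chat-completions request with `n` and `temperature` forwarded through.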

We’ve iterated on this process extensively and found that completions generated this way tend to be significantly higher quality than those generated by GPT-4 in a single step, and they lead to much stronger downstream fine-tuned models as well.

Using MoA in Production

To use MoA models at inference time, make requests to the /chat/completions endpoint with a MoA model. See instructions.
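
As an illustration, a request might look like the following, using only Python’s standard library. The base URL and model name here are assumptions for the sketch; substitute the values from your own project settings.

```python
# Illustrative request to the /chat/completions endpoint with a MoA
# model. The base URL and default model name are assumptions; check
# your project settings for the real values.
import json
import urllib.request

def moa_chat(api_key, messages, model="moa-gpt-4o-v1"):
    """Send a chat completion request and return the assistant's reply."""
    req = urllib.request.Request(
        "https://api.openpipe.ai/api/v1/chat/completions",  # assumed base URL
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Any OpenAI-compatible client can make the same request; the endpoint accepts the standard chat-completions payload with the MoA model name in the `model` field.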

Using the MoA Relabeling Flow

The following instructions explain how to copy an existing dataset and relabel it with the mixture-of-agents flow, which will let you train models on the higher-quality outputs.

  1. Export the original dataset

    Navigate to your existing OpenPipe dataset and click the “Export” button in the upper right. Keep the “Include split” checkbox checked. You’ll download a .jsonl file with the contents of your dataset (this may take a few minutes).

  2. Re-import the dataset

    Create a new dataset in your project. Import the file you exported from step (1). Once the import finishes, your new dataset should contain a copy of the same data as the old one.

  3. Open the Data Pipeline view

    Navigate to the Data Pipeline tab in the new dataset, then expand the Data Pipeline view by hovering over and clicking the data pipeline preview.

  4. Select the relabeling model

    Select the “LLM Relabel” node for the file you just uploaded. Then, in the sidebar, choose one of moa-gpt-4-v1, moa-gpt-4-turbo-v1, or moa-gpt-4o-v1, depending on which model you’d like to use as your MoA base. Note: we use your API key for relabeling, so you’ll need to have entered a valid OpenAI API key in your project settings for this to work.

  5. Wait for relabeling to finish

    Depending on your dataset size, relabeling may take quite a while. Behind the scenes, we run 4 relabeling jobs in parallel at a time. You’ll know relabeling has finished when the “Processing entries” status disappears at the top right of the dataset view.

  6. Train a model on the new dataset

    Train the base model of your choice on the new dataset.

  7. (Optional) Evaluate your new model against your old one

    If you have an existing head-to-head evaluation on the platform, you can easily add your new model to it to see how it compares. Simply open your existing eval and add your newly trained model as another model to compare!

Costs

We aren’t charging for the MoA relabeling flow while it is in beta. However, you will pay for the underlying calls to the OpenAI API. The exact cost varies depending on your mix of input and output tokens, but as a rule of thumb, our MoA approach uses 3x-4x as many tokens as generating the same completion in a single step.
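
Applying that rule of thumb, a rough budget estimate might look like the following. The token count and per-1K-token price are placeholders for illustration, not actual OpenAI pricing.

```python
# Back-of-the-envelope estimate using the 3x-4x token rule of thumb.
# The multiplier default of 3.5 splits the difference; the price and
# token figures below are placeholders, not real OpenAI pricing.

def estimate_moa_cost(single_step_tokens, price_per_1k_tokens, multiplier=3.5):
    """Estimate MoA token usage and cost from a single-step baseline."""
    moa_tokens = single_step_tokens * multiplier
    return moa_tokens, moa_tokens / 1000 * price_per_1k_tokens

# e.g. a dataset whose single-step relabeling would use 2M tokens,
# at a hypothetical blended price of $0.01 per 1K tokens:
tokens, cost = estimate_moa_cost(2_000_000, price_per_1k_tokens=0.01)
```

In this hypothetical, the MoA flow would consume about 7M tokens, so it’s worth sizing your dataset and budget with the multiplier in mind before kicking off a large relabeling job.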