POST /prompt/optimize

Example request (TypeScript SDK):
import PiClient from 'withpi';

const client = new PiClient({
  apiKey: process.env['WITHPI_API_KEY'], // This is the default and can be omitted
});

async function main() {
  const promptOptimizationStatus = await client.prompt.optimize.startJob({
    examples: [
      {
        llm_input: 'Tell me something different',
        llm_output: 'The lazy dog was jumped over by the quick brown fox',
      },
    ],
    initial_system_instruction: 'Write a great story around the given topic.',
    model_id: 'gpt-4o-mini',
    scoring_spec: {
      description: "Write a children's story communicating a simple life lesson.",
      dimensions: [
        {
          description: 'dimension1 description',
          label: 'dimension1',
          sub_dimensions: [
            { description: 'subdimension1 description', label: 'subdimension1', scoring_type: 'PI_SCORER' },
          ],
        },
      ],
      name: 'Sample Scoring Spec',
    },
    tuning_algorithm: 'DSPY',
  });

  console.log(promptOptimizationStatus.job_id);
}

main();

Example response (200):

{
  "detailed_status": [
    "Downloading model",
    "Tuning prompt"
  ],
  "job_id": "1234abcd",
  "optimized_prompt_messages": [
    {
      "content": "Write a great story around the given topic.",
      "role": "system"
    },
    {
      "content": "{{ input }}",
      "role": "user"
    }
  ],
  "state": "RUNNING"
}

Authorizations

x-api-key
string
header
required

Body

application/json
examples
object[]
required

The examples (input-response pairs) to train and validate on. Each item is one example for training or evaluation.

initial_system_instruction
string
required

The initial system instruction

Example:

"Write a great story around the given topic."

model_id
enum<string>
required

The model to use for generating responses

Available options:
gpt-4o-mini,
llama-3.1-8b,
mock-llm
scoring_spec
object
required

The scoring spec to optimize

tuning_algorithm
enum<string>
required

The tuning algorithm to use

Available options:
DSPY,
PI
dspy_optimization_type
enum<string> | null

The DSPy teleprompter/optimizer to use. This applies only when tuning_algorithm is DSPY; leave it null otherwise.

Available options:
BOOTSTRAP_FEW_SHOT,
COPRO,
MIPROv2
Example:

"COPRO"

use_chain_of_thought
boolean
default:false

Whether to use chain-of-thought prompting. This applies only when tuning_algorithm is DSPY; leave it unset otherwise.

Example:

false
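
Putting the DSPy-specific options together, here is a minimal sketch of the request fields involved (values are illustrative; the other required fields from the example above are omitted):

// Illustrative DSPY-specific request fields. These only take effect
// when tuning_algorithm is 'DSPY'. Required fields (examples,
// initial_system_instruction, model_id, scoring_spec) are omitted here.
const dspyParams = {
  tuning_algorithm: 'DSPY',
  dspy_optimization_type: 'COPRO', // or 'BOOTSTRAP_FEW_SHOT', 'MIPROv2'
  use_chain_of_thought: true, // defaults to false
};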

Response

200
application/json
Successful Response

The optimized_prompt_messages field is an empty list unless the state is DONE.
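
Because the job runs asynchronously, clients typically poll the job status until the state leaves QUEUED/RUNNING. A minimal sketch, assuming the SDK exposes a status-retrieval method (client.prompt.optimize.retrieve below is an assumed name based on the startJob call above, not confirmed by this page):

// Poll until the job reaches a terminal state.
// `client.prompt.optimize.retrieve` is an assumed method name.
async function waitForOptimizedPrompt(jobId: string) {
  for (;;) {
    const status = await client.prompt.optimize.retrieve(jobId); // assumption
    if (status.state === 'DONE') return status.optimized_prompt_messages;
    if (status.state === 'ERROR' || status.state === 'CANCELLED') {
      throw new Error(`Job ${jobId} ended in state ${status.state}`);
    }
    // Still QUEUED or RUNNING; wait before polling again.
    await new Promise((resolve) => setTimeout(resolve, 5000));
  }
}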

detailed_status
string[]
required

Detailed status of the job

Example:
["Downloading model", "Tuning prompt"]
job_id
string
required

The job id

Example:

"1234abcd"

state
enum<string>
required

Current state of the job

Available options:
QUEUED,
RUNNING,
DONE,
ERROR,
CANCELLED
optimized_prompt_messages
object[]

The optimized prompt messages, in the OpenAI message format, with a Jinja {{ input }} variable standing in for the next user prompt

Example:
[
  {
    "content": "Write a great story around the given topic.",
    "role": "system"
  },
  { "content": "{{ input }}", "role": "user" }
]
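
To use the result, substitute the next user input for the {{ input }} placeholder before sending the messages to a model. A minimal sketch using plain string replacement in place of a full Jinja renderer:

// Fill the Jinja-style {{ input }} placeholder in the returned messages.
// A real template engine (e.g. Nunjucks) would also work.
type Message = { role: string; content: string };

function renderPrompt(messages: Message[], userInput: string): Message[] {
  return messages.map((m) => ({
    ...m,
    content: m.content.replace('{{ input }}', userInput),
  }));
}

// e.g. renderPrompt(optimized_prompt_messages, 'A story about sharing')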