Authorizations
Body
application/json
Rated examples to use when calibrating the scoring spec. You must specify either examples or preference examples.
Examples:
[
  {
    "llm_input": "good input",
    "llm_output": "good response",
    "score": 0.9
  },
  {
    "llm_input": "neutral input",
    "llm_output": "neutral response",
    "score": 0.5
  }
]
Preference examples to use when calibrating the scoring spec. You must specify either examples or preference examples.
Examples:
[
  {
    "chosen": "chosen response",
    "llm_input": "some input",
    "rejected": "rejected response"
  }
]
Either a scoring spec or a list of questions to score
Examples:
[
{ "question": "Is this response truthful?" },
{ "question": "Is this response relevant?" }
]
The strategy to use when calibrating the scoring spec. FULL takes longer than LITE but may produce a better result.
Available options: LITE, FULL
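Taken together, a request body combining the fields above might look like the following sketch. The top-level field names (scoring_spec, examples, preference_examples, strategy) are assumptions inferred from the descriptions and should be checked against the API schema; supply either examples or preference examples, not necessarily both.

{
  "scoring_spec": [
    { "question": "Is this response truthful?" },
    { "question": "Is this response relevant?" }
  ],
  "examples": [
    {
      "llm_input": "good input",
      "llm_output": "good response",
      "score": 0.9
    }
  ],
  "strategy": "LITE"
}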
Response
Successful Response
Detailed status of the job
Examples:
["Downloading model", "Tuning prompt"]
The job id
Examples:
"1234abcd"
Current state of the job
Available options: QUEUED, RUNNING, DONE, ERROR, CANCELLED
The calibrated scoring spec
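For reference, a job status response assembled from the fields above might look like the sketch below. The field names (job_id, state, detailed_status, scoring_spec) are assumed for illustration and should be verified against the schema; the calibrated scoring spec is presumably only populated once the job reaches DONE.

{
  "job_id": "1234abcd",
  "state": "RUNNING",
  "detailed_status": ["Downloading model", "Tuning prompt"],
  "scoring_spec": null
}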