POST /data/generate
import PiClient from 'withpi';

const client = new PiClient({
  apiKey: process.env['WITHPI_API_KEY'], // This is the default and can be omitted
});

async function main() {
  const dataGenerationStatus = await client.data.generate.startJob({
    application_description: "Write a children's story communicating a simple life lesson.",
    num_inputs_to_generate: 50,
    seeds: [
      'The quick brown fox jumped over the lazy dog',
      'The lazy dog was jumped over by the quick brown fox',
    ],
  });

  console.log(dataGenerationStatus.job_id);
}

main();
Example response:

{
  "data": [
    "The quick brown fox jumped over the lazy dog",
    "The lazy dog was jumped over by the quick brown fox"
  ],
  "detailed_status": [
    "Downloading model",
    "Tuning prompt"
  ],
  "job_id": "1234abcd",
  "state": "RUNNING"
}

Authorizations

x-api-key
string
header
required

Body

application/json
application_description
string
required

The application description for which the generated inputs should be applicable.

Example:

"Write a children's story communicating a simple life lesson."

num_inputs_to_generate
integer
required

The number of new LLM inputs to generate.

Example:

50

seeds
string[]
required

The list of LLM inputs to use as seeds.

Example:
[
  "The quick brown fox jumped over the lazy dog",
  "The lazy dog was jumped over by the quick brown fox"
]
batch_size
integer
default:5

Number of inputs to generate in one LLM call. Must be <= 10. Generally this can be the same as num_shots.

Example:

5

exploration_mode
enum<string>

The exploration mode for input generation. Defaults to BALANCED.

Available options:
CONSERVATIVE,
BALANCED,
CREATIVE,
ADVENTUROUS
num_shots
integer
default:5

Number of inputs to include in the prompt for generation. Must be <= 10. Generally this can be the same as batch_size.

Example:

5
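Since the SDK call above wraps a plain HTTP request, the body fields documented here can also be assembled by hand. The sketch below builds the request from the documented fields and the required `x-api-key` header; `BASE_URL` is a placeholder assumption, since this reference does not state the API's base URL.

```typescript
// BASE_URL is an assumption; substitute the real Pi API base URL.
const BASE_URL = 'https://api.example.com';

// Mirrors the documented request body: three required fields, three optional.
interface GenerateBody {
  application_description: string;
  num_inputs_to_generate: number;
  seeds: string[];
  batch_size?: number; // must be <= 10; defaults to 5
  exploration_mode?: 'CONSERVATIVE' | 'BALANCED' | 'CREATIVE' | 'ADVENTUROUS';
  num_shots?: number; // must be <= 10; defaults to 5
}

function buildGenerateRequest(apiKey: string, body: GenerateBody) {
  return {
    url: `${BASE_URL}/data/generate`,
    init: {
      method: 'POST',
      headers: {
        'x-api-key': apiKey, // required auth header
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
    },
  };
}

// The resulting request can be sent with fetch(url, init).
const { url, init } = buildGenerateRequest('my-key', {
  application_description: "Write a children's story communicating a simple life lesson.",
  num_inputs_to_generate: 50,
  seeds: ['The quick brown fox jumped over the lazy dog'],
});
```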

Response

200
application/json
Successful Response

DataGenerationStatus is the result of a data generation job.

detailed_status
string[]
required

Detailed status of the job.

Example:
["Downloading model", "Tuning prompt"]
job_id
string
required

The job ID.

Example:

"1234abcd"

state
enum<string>
required

Current state of the job.

Available options:
QUEUED,
RUNNING,
DONE,
ERROR,
CANCELLED
data
string[] | null

The generated data. May be present even if the state is not DONE or ERROR, since results are streamed into it while the job runs.

Example:
[
  "The quick brown fox jumped over the lazy dog",
  "The lazy dog was jumped over by the quick brown fox"
]
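A caller inspecting a DataGenerationStatus should handle partial results: `data` can already contain inputs while the job is still RUNNING. The sketch below types the documented response fields and shows one way to summarize a status; the `summarize` helper is illustrative, not part of the SDK.

```typescript
// Types mirror the documented response fields.
type JobState = 'QUEUED' | 'RUNNING' | 'DONE' | 'ERROR' | 'CANCELLED';

interface DataGenerationStatus {
  detailed_status: string[];
  job_id: string;
  state: JobState;
  data?: string[] | null;
}

// Illustrative helper: turn a status payload into a one-line summary.
function summarize(status: DataGenerationStatus): string {
  // data is streamed, so it may be partially populated before the job is DONE.
  const generated = status.data?.length ?? 0;
  const lastStatus = status.detailed_status[status.detailed_status.length - 1];
  if (status.state === 'ERROR') {
    return `Job ${status.job_id} failed: ${lastStatus}`;
  }
  if (status.state === 'DONE') {
    return `Job ${status.job_id} finished with ${generated} inputs`;
  }
  return `Job ${status.job_id} is ${status.state} (${generated} inputs so far)`;
}
```

Because the example response above shows `state: "RUNNING"` alongside a non-empty `data` array, consumers should read `data` on every poll rather than only once the job reaches a terminal state.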