Submit LLM Judge Labeling Job
POST /api/2/auto_eval/pseudo_label_job/submit
Submit an LLM Judge labeling job.
Request
- application/json
Body
- jobId (object, required)
- pseudoLabelJobConfig (object, required)
  - Subset of columns to be used in pseudo-labeling. Expected columns: input, output, ground_truth. For example, a summarization task might not need an input column.
  - Optional name for the job.
  - Optional description for the job.
  - datasetId (object, required)
  - fewShotDatasetId (object)
  - activeLabeledDatasetId (object)
  - chatCompletionConfig (object, required)
    - The ID of the model to use for the completion.
    - messages (object[], required): the list of messages in the conversation so far. Each message has a role ('system', 'user', or 'assistant') and the content of the message.
    - The maximum number of tokens to generate. Possible values: >= -2147483648 and <= 2147483647.
    - The temperature to use for the completion.
    - The top_p value to use for the completion.
  - promptTemplate (object, required): the template string that defines the prompt.
  - If true, skip active labeling.
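For orientation, here is a minimal sketch of building this request body and submitting it with Python's requests library. Only the field names shown in the schema above (jobId, pseudoLabelJobConfig, datasetId, fewShotDatasetId, chatCompletionConfig, messages, promptTemplate) come from this page; the base URL, the auth header, the {"value": ...} shape of the dataset ID objects, and names such as model, role, content, maxTokens, temperature, topP, template, and skipActiveLabeling are assumptions inferred from the field descriptions, not confirmed API names.

```python
# Minimal sketch: build and submit a pseudo-label (LLM Judge) job request.
# Assumptions not confirmed by this page: the base URL, bearer-token auth,
# the {"value": ...} shape of the dataset ID objects, and the field names
# model, role, content, maxTokens, temperature, topP, template, skipActiveLabeling.
import requests

BASE_URL = "https://api.example.com"   # placeholder host
API_TOKEN = "YOUR_API_TOKEN"           # placeholder credential

payload = {
    "jobId": {"value": "my-judge-job"},  # required at the top level per the schema above
    "pseudoLabelJobConfig": {
        "datasetId": {"value": "dataset-to-label"},          # required: dataset to pseudo-label
        "fewShotDatasetId": {"value": "few-shot-examples"},  # optional
        "chatCompletionConfig": {
            "model": "judge-model-id",                       # ID of the judge model
            "messages": [                                    # conversation so far
                {"role": "system", "content": "You are an impartial judge."},
                {"role": "user", "content": "Grade the model output against the ground truth."},
            ],
            "maxTokens": 256,    # maximum number of tokens to generate
            "temperature": 0.0,  # temperature for the completion
            "topP": 1.0,         # top_p value for the completion
        },
        "promptTemplate": {
            "template": "Input: {input}\nOutput: {output}\nGround truth: {ground_truth}\nIs the output correct?"
        },
        "skipActiveLabeling": False,  # "If true, skip active labeling."
    },
}

response = requests.post(
    f"{BASE_URL}/api/2/auto_eval/pseudo_label_job/submit",
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
```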
Responses

- 200: Successful operation
- application/json

Schema

- jobId (object, required)

Example (from schema)

```json
{
  "jobId": {
    "value": "string"
  }
}
```
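Continuing the hypothetical request sketched above, the job identifier can be read straight from the 200 response body, which follows the example from the schema:

```python
# The successful response wraps the new job's ID in a {"jobId": {"value": ...}} object.
job = response.json()
job_id = job["jobId"]["value"]
print(f"Submitted pseudo-label job: {job_id}")
```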