Add deployed models to this deployment
Cancels jobs for an experiment version or batch prediction.
Clears the cache for ML API requests.
Create a deployment
Creates an alias for a deployment.
Create a prediction configuration
Create an experiment
Creates an experiment version. Poll this version and check its status field to determine when models are finished training.
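A minimal polling sketch in Python using the requests library is shown below. The base URL, the versions endpoint path, the auth header, and the status values are assumptions for illustration; substitute the values from this reference.

```python
import time
import requests

BASE_URL = "https://api.example.com/ml"        # assumed host
HEADERS = {"Authorization": "Bearer <token>"}  # assumed auth scheme

def wait_for_version(experiment_id: str, version_id: str, interval: int = 30) -> dict:
    """Poll an experiment version until its status field shows training is finished."""
    url = f"{BASE_URL}/experiments/{experiment_id}/versions/{version_id}"  # path is assumed
    while True:
        version = requests.get(url, headers=HEADERS).json()
        # "PENDING" and "RUNNING" are assumed in-progress values; check the real status values.
        if version.get("status") not in ("PENDING", "RUNNING"):
            return version
        time.sleep(interval)
```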
Starts creating profile insights for an experiment dataset. This is an asynchronous operation. A 202 Accepted response indicates that the process has started successfully. Use the link in the response to check the status.
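The asynchronous kickoff can be sketched as follows. Only the POST /ml/profile-insights path comes from this reference; the host, request body, auth header, and where the status link appears in the response are assumptions.

```python
import requests

BASE_URL = "https://api.example.com"           # assumed host
HEADERS = {"Authorization": "Bearer <token>"}  # assumed auth scheme

# Start profile-insight generation for a dataset (body shape is assumed).
resp = requests.post(
    f"{BASE_URL}/ml/profile-insights",
    json={"datasetId": "<dataset-id>"},
    headers=HEADERS,
)

if resp.status_code == 202:
    # The reference says the response carries a link for checking status; whether it is
    # a Location header or a body field is an assumption here.
    status_url = resp.headers.get("Location") or resp.json().get("link")
    print("Profile insights started; check status at:", status_url)
```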
Deactivate the model for this deployment
Delete a deployment
Delete an alias from a deployment.
Delete a batch prediction
Deletes the schedule from a batch prediction.
Delete an experiment
Delete an experiment version
Get a deployment
Retrieves an alias that exists on the deployment.
Retrieves a list of aliases based on filter parameters for a deployment.
Retrieve a batch prediction
List batch prediction configurations
Retrieves the schedule for a batch prediction.
List deployments
Get an experiment
Get a model
List models
Retrieves a list of experiments based on provided filter and sort parameters.
Get an experiment version
List experiment versions
Retrieves profile insights for the specified dataset. If you received a 202 Accepted response from POST /ml/profile-insights, poll this endpoint until a 200 OK response with ready status is returned.
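A minimal polling loop for that flow might look like the sketch below; the host, the GET path with a dataset identifier, the auth header, and the exact spelling of the ready status are assumptions.

```python
import time
import requests

BASE_URL = "https://api.example.com"           # assumed host
HEADERS = {"Authorization": "Bearer <token>"}  # assumed auth scheme

def get_profile_insights(dataset_id: str, interval: int = 15) -> dict:
    """Poll until the insights endpoint returns 200 OK with a ready status."""
    url = f"{BASE_URL}/ml/profile-insights/{dataset_id}"  # path parameter is assumed
    while True:
        resp = requests.get(url, headers=HEADERS)
        if resp.status_code == 200:
            body = resp.json()
            if body.get("status") == "ready":  # "ready" spelling is assumed
                return body
        time.sleep(interval)
```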
Update a deployment
Updates an alias for a deployment.
Updates a batch prediction
Update an experiment
Update an experiment version
Run a batch prediction
Returns model recommendations for a specified experiment, including the best-performing, fastest, and most accurate models based on evaluation metrics.
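One possible way to consume that endpoint is sketched below; the host, path, auth header, and response field names are assumptions, not confirmed by this reference.

```python
import requests

BASE_URL = "https://api.example.com"           # assumed host
HEADERS = {"Authorization": "Bearer <token>"}  # assumed auth scheme

# Fetch recommendations for an experiment (path is assumed).
recs = requests.get(
    f"{BASE_URL}/ml/experiments/<experiment-id>/recommendations",
    headers=HEADERS,
).json()

for category in ("bestPerforming", "fastest", "mostAccurate"):  # assumed keys
    print(category, recs.get(category))
```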
Remove deployed models from this deployment
Generate predictions in a synchronous request/response call
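For the synchronous prediction call above, a request/response round trip might look like this sketch; the host, the predict path under a deployment, the auth header, and the payload shape are all assumptions.

```python
import requests

BASE_URL = "https://api.example.com"           # assumed host
HEADERS = {"Authorization": "Bearer <token>"}  # assumed auth scheme

# Send input rows to a deployed model and read predictions from the same response.
resp = requests.post(
    f"{BASE_URL}/ml/deployments/<deployment-id>/predict",        # path is assumed
    json={"rows": [{"feature_a": 1.0, "feature_b": "red"}]},     # payload shape is assumed
    headers=HEADERS,
)
resp.raise_for_status()
print(resp.json())  # predictions are returned directly in the response body
```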
Adds a schedule to a batch prediction.
Updates the schedule for a batch prediction.
Activate the model for this deployment