from enum import StrEnum
class Action(StrEnum):
CHAT = "chat"
COMPLETE = "complete"
DETECT = "detect"
EMBED = "embed"
PREDICT = "predict"
PREDICT_PROBA = "predict_proba"
RECONSTRUCT = "reconstruct"
ACTION_DOCSTRINGS = {
"predict": """
This function is used by both computer vision (CV) and natural language processing (NLP) models. The NLP inputs/outputs have not been documented yet.
Used by the following models to run inference on input data:
- Image Classification
- Image Segmentation
Parameters
----------
sample:
Input(s) to run inference on. Accepts a filepath string, a PIL.Image object, or a list containing any combination of filepath strings and PIL.Image objects.
Only files types supported by PIL.Image are accepted.
timeout:
Number in seconds to wait for the inference server to be ready, if it is not already running.
Defaults to 60 seconds.
verbose:
Whether to enable more verbose logging for `wait_for_inference_server`, which waits for the inference server to be ready if it is not already running.
Defaults to False.
return_inference_id:
Return the inference request ID that can be used to lookup the inferences in the inference store if 'inference_storage' is enabled.
If 'sample' contained multiple images, the function will still only return one request ID, but the individual images can be looked by appending '-{i}' to the request ID, where i is the index of the image in 'sample'.
Defaults to False.
return_semantic_score:
Return the semantic score for the image(s). This will be a list of floats, one per input image in your batch. A value close to 1 means the image is similar to the model's training distribution, and a value close to 0 means it is dissimilar.
Returns
-------
The inference result.
- Image Classification: (List[str]) List of class labels corresponding to each input image.
- Image Segmentation: (List[List[List[int]]]) List of segmentation masks (List[List[int]]) corresponding to each input image. Segmentations mask use the class indexes which can be converted to class labels the dictionary provided by Model.inverse_class_labels.
If return_inference_id=True, output becomes a dictionary with the format `{'inference_id': 'aaaaaaaa-aaaa-1111-1111-a1a1a1a1a1a1', 'inference': <inference result>}`
Raises
-------
ActionUnsupportedByCurrentModelError:
If the models is not one of the following task types:
- Image Classification
- Image Segmentation
ApiException:
If the call to get inference server status does not return a status code that is 2XX or 404. (Thrown by `wait_for_inference_server`)
RuntimeError:
If the inference server is in an unknown state, failed to spin up, or was not able to spin up within the timeout period. (Thrown by `wait_for_inference_server`)
FileNotFoundError:
If the provided filepath does not exist.
IsADirectoryError:
If the provided filepath is a directory.
PIL.UnidentifiedImageError:
If the provided filepath is to an unsupported image file type.
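
    Examples
    --------
    A minimal, illustrative call for an Image Classification model. The `model` variable and the returned values are stand-ins, not names defined by this module:

    >>> model.predict("images/cat.png")
    ['cat']
    >>> model.predict(["images/cat.png", "images/dog.png"], return_inference_id=True)
    {'inference_id': 'aaaaaaaa-aaaa-1111-1111-a1a1a1a1a1a1', 'inference': ['cat', 'dog']}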
""",
"predict_proba": """
    Used by the following models to run inference on input data:

    - Image Classification
    - Image Segmentation

    Parameters
    ----------
    sample:
        Input(s) to run inference on. Accepts a filepath string, a PIL.Image object, or a list containing any combination of filepath strings and PIL.Image objects.
        Only file types supported by PIL.Image are accepted.
    timeout:
        Number of seconds to wait for the inference server to be ready, if it is not already running.
        Defaults to 60 seconds.
    verbose:
        Whether to enable more verbose logging for `wait_for_inference_server`, which waits for the inference server to be ready if it is not already running.
        Defaults to False.
    return_inference_id:
        Whether to return the inference request ID, which can be used to look up inferences in the inference store if 'inference_storage' is enabled.
        If 'sample' contains multiple images, the function still returns only one request ID, but the individual images can be looked up by appending '-{i}' to the request ID, where i is the index of the image in 'sample'.
        Defaults to False.
    return_semantic_score:
        Whether to return the semantic score for the image(s). This will be a list of floats, one per input image in your batch. A value close to 1 means the image is similar to the model's training distribution, and a value close to 0 means it is dissimilar.

    Returns
    -------
    The inference result.

    - Image Classification: (List[List[float]]) List of class probabilities corresponding to each input image. Example parsing: prob_nth_image_is_class = response[nth_image][class_index]
    - Image Segmentation: (List[List[List[List[float]]]]) List of class probability masks (List[List[List[float]]]) corresponding to each input image. Example parsing: prob_nth_image_pixel_is_class = response[nth_image][class_index][row][column]

    If return_inference_id=True, the output becomes a dictionary with the format `{'inference_id': 'aaaaaaaa-aaaa-1111-1111-a1a1a1a1a1a1', 'inference': <inference result>}`.

    Raises
    ------
    ActionUnsupportedByCurrentModelError:
        If the model is not one of the following task types:

        - Image Classification
        - Image Segmentation
    ApiException:
        If the call to get the inference server status does not return a status code that is 2XX or 404. (Raised by `wait_for_inference_server`.)
    RuntimeError:
        If the inference server is in an unknown state, failed to spin up, or was not able to spin up within the timeout period. (Raised by `wait_for_inference_server`.)
    FileNotFoundError:
        If the provided filepath does not exist.
    IsADirectoryError:
        If the provided filepath is a directory.
    PIL.UnidentifiedImageError:
        If the provided filepath is to an unsupported image file type.
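
    Examples
    --------
    Illustrative parsing of Image Classification output. The `model` variable and the values are stand-ins, not names defined by this module:

    >>> probs = model.predict_proba("images/cat.png")
    >>> probs[0][2]  # probability that image 0 belongs to class index 2
    0.91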
""",
"detect": """
    Used by the following models to run inference on input data:

    - Object Detection

    Parameters
    ----------
    sample:
        Input(s) to run inference on. Accepts a filepath string, a PIL.Image object, or a list containing any combination of filepath strings and PIL.Image objects.
        Only file types supported by PIL.Image are accepted.
    timeout:
        Number of seconds to wait for the inference server to be ready, if it is not already running.
        Defaults to 60 seconds.
    verbose:
        Whether to enable more verbose logging for `wait_for_inference_server`, which waits for the inference server to be ready if it is not already running.
        Defaults to False.
    return_inference_id:
        Whether to return the inference request ID, which can be used to look up inferences in the inference store if 'inference_storage' is enabled.
        If 'sample' contains multiple images, the function still returns only one request ID, but the individual images can be looked up by appending '-{i}' to the request ID, where i is the index of the image in 'sample'.
        Defaults to False.
    return_semantic_score:
        Whether to return the semantic score for the image(s). This will be a list of floats, one per input image in your batch. A value close to 1 means the image is similar to the model's training distribution, and a value close to 0 means it is dissimilar.
    score_threshold:
        Detections with scores below this threshold are ignored and not reported.

    Returns
    -------
    The inference result.

    - Object Detection: (List[Dict]) List of dictionaries corresponding to each input image, with the following format:
      `{'num_detections': 0, 'detection_classes': [], 'detection_scores': [], 'detection_boxes': []}`

      - num_detections: (int) Number of detections found in the image.
      - detection_classes: (List[str]) Class labels of each detection.
      - detection_scores: (List[float]) Score of each detection.
      - detection_boxes: (List[List[float]]) List of detection bounding boxes (List[float]). Detection bounding boxes are rectangles represented as [y_min, x_min, y_max, x_max], where (0, 0) is the top left of the image.

      Detections are index-aligned, so detection 0 has class `detection_classes[0]`, detection score `detection_scores[0]`, and bounding box `detection_boxes[0]`.

    If return_inference_id=True, the output becomes a dictionary with the format `{'inference_id': 'aaaaaaaa-aaaa-1111-1111-a1a1a1a1a1a1', 'inference': <inference result>}`.

    Raises
    ------
    ActionUnsupportedByCurrentModelError:
        If the model is not one of the following task types:

        - Object Detection
    ApiException:
        If the call to get the inference server status does not return a status code that is 2XX or 404. (Raised by `wait_for_inference_server`.)
    RuntimeError:
        If the inference server is in an unknown state, failed to spin up, or was not able to spin up within the timeout period. (Raised by `wait_for_inference_server`.)
    FileNotFoundError:
        If the provided filepath does not exist.
    IsADirectoryError:
        If the provided filepath is a directory.
    PIL.UnidentifiedImageError:
        If the provided filepath is to an unsupported image file type.
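
    Examples
    --------
    Illustrative use of the index-aligned detection fields. The `model` variable and the printed values are stand-ins, not names defined by this module:

    >>> detections = model.detect("images/street.png", score_threshold=0.5)[0]
    >>> for i in range(detections['num_detections']):
    ...     print(detections['detection_classes'][i], detections['detection_scores'][i])
    car 0.98
    person 0.77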
""",
"embed": """
    This function is used by both computer vision (CV) and natural language processing (NLP) models. The NLP inputs/outputs have not been documented yet.

    Used by the following models to run inference on input data:

    - Image Classification
    - Image Embedding

    Parameters
    ----------
    sample:
        Input(s) to run inference on. Accepts a filepath string, a PIL.Image object, or a list containing any combination of filepath strings and PIL.Image objects.
        Only file types supported by PIL.Image are accepted.
    timeout:
        Number of seconds to wait for the inference server to be ready, if it is not already running.
        Defaults to 60 seconds.
    verbose:
        Whether to enable more verbose logging for `wait_for_inference_server`, which waits for the inference server to be ready if it is not already running.
        Defaults to False.
    return_inference_id:
        Whether to return the inference request ID, which can be used to look up inferences in the inference store if 'inference_storage' is enabled.
        If 'sample' contains multiple images, the function still returns only one request ID, but the individual images can be looked up by appending '-{i}' to the request ID, where i is the index of the image in 'sample'.
        Defaults to False.

    Returns
    -------
    The inference result.

    - Image Classification: (List[List[float]]) List of embeddings (List[float]) corresponding to each input image.
    - Image Embedding: (List[List[float]]) List of embeddings (List[float]) corresponding to each input image.

    If return_inference_id=True, the output becomes a dictionary with the format `{'inference_id': 'aaaaaaaa-aaaa-1111-1111-a1a1a1a1a1a1', 'inference': <inference result>}`.

    Raises
    ------
    ActionUnsupportedByCurrentModelError:
        If the model is not one of the following task types:

        - Image Classification
        - Image Embedding
    ApiException:
        If the call to get the inference server status does not return a status code that is 2XX or 404. (Raised by `wait_for_inference_server`.)
    RuntimeError:
        If the inference server is in an unknown state, failed to spin up, or was not able to spin up within the timeout period. (Raised by `wait_for_inference_server`.)
    FileNotFoundError:
        If the provided filepath does not exist.
    IsADirectoryError:
        If the provided filepath is a directory.
    PIL.UnidentifiedImageError:
        If the provided filepath is to an unsupported image file type.
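
    Examples
    --------
    A minimal, illustrative call. The `model` variable and the output are stand-ins, not names defined by this module:

    >>> vectors = model.embed(["images/cat.png", "images/dog.png"])
    >>> len(vectors)  # one embedding per input image
    2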
""",
}
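

# A minimal sketch of how this mapping might be consumed: copying the shared
# docstring for an action onto a callable (for example, a dynamically attached
# model method). This helper is illustrative only, not part of the module above.
def attach_action_docstring(func, action: Action):
    """Attach the shared docstring for `action` to `func`, if one is defined."""
    doc = ACTION_DOCSTRINGS.get(action.value)
    if doc is not None:
        func.__doc__ = doc
    return func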