chariot.inference_store package
Submodules
chariot.inference_store.export_task module
- chariot.inference_store.export_task.create_export_task(model_id: str, request_body: NewExportTaskRequest) ExportTask | None [source]
Create an export task to get data and inferences, and optionally annotations and metadata.
- Parameters:
model_id (str) – The model id.
request_body (models.NewExportTaskRequest) – The export task to execute.
- Returns:
The export task.
- Return type:
Optional[models.ExportTask]
- chariot.inference_store.export_task.get_export_task(model_id: str, export_task_id: str) ExportTask | None [source]
Get an export task.
- Parameters:
model_id (str) – The model id.
export_task_id (str) – The export task id.
- Returns:
The export task.
- Return type:
Optional[models.ExportTask]
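For example, a minimal sketch of creating an export task and polling it until the archive is ready (this assumes the package is imported as istore and re-exports these names at the top level, as in the examples below; MY_MODEL_ID is a placeholder):

    import time

    from chariot import inference_store as istore

    # Request an export of matching inferences, including custom metadata.
    task = istore.create_export_task(
        MY_MODEL_ID,
        istore.NewExportTaskRequest(include_custom_metadata=True),
    )

    # Poll until a download url is available (a robust loop would also
    # inspect task.state for failure).
    while task is not None and task.presigned_url is None:
        time.sleep(5)
        task = istore.get_export_task(MY_MODEL_ID, task.id)

    if task is not None:
        print(f"Download the archive from {task.presigned_url}")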
chariot.inference_store.inference module
- chariot.inference_store.inference.bulk_delete_inferences(model_id: str, filter_: BaseInferenceFilter) list[str | None] [source]
Delete inferences and all associated metadata.
For example, this can be used to delete all inferences for a given snapshot/split, like this:
    deleted_inference_ids = istore.bulk_delete_inferences(
        MY_MODEL_ID,
        istore.BaseInferenceFilter(
            metadata_filter=[
                istore.MetadataFilter(
                    key="dataset_snapshot_id",
                    type=istore.MetadataFilterType.STRING,
                    operator=istore.MetadataFilterOperator.EQUAL,
                    value=MY_SNAPSHOT_ID,
                ),
                istore.MetadataFilter(
                    key="dataset_snapshot_split",
                    type=istore.MetadataFilterType.STRING,
                    operator=istore.MetadataFilterOperator.EQUAL,
                    value=MY_SNAPSHOT_SPLIT,
                ),
            ]
        ),
    )
- Parameters:
model_id (str) – The model id.
filter_ (models.BaseInferenceFilter) – A filter describing which inferences to delete.
- Returns:
The deleted inference ids.
- Return type:
list[str | None]
- chariot.inference_store.inference.count_inferences(model_id: str, request_body: NewGetInferencesRequest) int [source]
Get the count of all inferences for a model, optionally matching a series of filters.
- Parameters:
model_id (str) – The model id.
request_body (models.NewGetInferencesRequest) – The inference filter to apply upon the full collection of inferences stored for the model.
- Returns:
The record count.
- Return type:
int
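As a sketch, counting all inferences from a given data source (the data source value is a placeholder):

    count = istore.count_inferences(
        MY_MODEL_ID,
        istore.NewGetInferencesRequest(
            filters=istore.BaseInferenceFilter(data_source_filter="camera-01"),
        ),
    )
    print(f"{count} inferences from camera-01")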
- chariot.inference_store.inference.create_inference(model_id: str, request_body: NewInferenceStorageRequest) Inference | None [source]
Create a new inference.
- Parameters:
model_id (str) – The model id.
request_body (models.NewInferenceStorageRequest) – The inference and metadata.
- Returns:
The inference.
- Return type:
Optional[models.Inference]
- chariot.inference_store.inference.create_inference_storage_request(model_id: str, inference_id: str, data: str | bytes | None, metadata: NewInferenceAndMetadataCollection, is_protected: bool = False) Inference | None [source]
Create an inference via the API. This function will upload the supplied data to blob storage if the data field is specified.
- Parameters:
model_id (str) – The model id.
inference_id (str) – The inference id.
data (Optional[Union[str, bytes]]) – Optionally, the data inferred upon.
metadata (models.NewInferenceAndMetadataCollection) – The inference and metadata.
is_protected (bool) – Whether the inference and its associated data should be protected from deletion by a retention policy.
- Returns:
The inference.
- Return type:
Optional[models.Inference]
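A minimal sketch of storing an object detection inference with its standard metadata (the ids, input file, and detection payload are placeholders):

    import uuid

    with open("example.jpg", "rb") as f:  # placeholder input data
        image_bytes = f.read()

    inference = istore.create_inference_storage_request(
        MY_MODEL_ID,
        inference_id=str(uuid.uuid4()),
        data=image_bytes,
        metadata=istore.NewInferenceAndMetadataCollection(
            object_detection=[
                {"bounding_box": [10.0, 10.0, 20.0, 20.0], "score": 0.9, "label": "car"},
            ],
            metadata=istore.NewMetadataCollectionRequest(
                standard_metadata=istore.NewStandardMetadataRequest(
                    data_hash=istore.get_data_hash(image_bytes),
                    data_size=len(image_bytes),
                    task_type=istore.TaskType.OBJECT_DETECTION,
                    inference_action=istore.InferenceAction.DETECT,
                ),
            ),
        ),
    )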
- chariot.inference_store.inference.delete_inference(model_id: str, inference_id: str) str | None [source]
Delete an inference and all associated metadata.
- Parameters:
model_id (str) – The model id.
inference_id (str) – The inference id.
- Returns:
The deleted inference id.
- Return type:
Optional[str]
- chariot.inference_store.inference.filter_inferences(model_id: str, request_body: NewGetInferencesRequest) list[Inference] [source]
Get inferences for a model, optionally matching a series of filters. Each record returned corresponds to an inference request/response pair. Inference responses are stored in the model's native format.
- For example, a record returned might have the following inference blob:
    {
        "detection_boxes": [[10, 10, 20, 20], [30, 30, 40, 40]],
        "detection_scores": [0.9, 0.95],
        "detection_labels": ["cat", "dog"]
    }
- Parameters:
model_id (str) – The model id.
request_body (models.NewGetInferencesRequest) – The inference filter to apply upon the full collection of inferences stored for the model.
- Returns:
The collection of inferences that met the filter criteria.
- Return type:
List[models.Inference]
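For example, a sketch that fetches one page of recent inferences with presigned download urls (the time bound is a placeholder):

    inferences = istore.filter_inferences(
        MY_MODEL_ID,
        istore.NewGetInferencesRequest(
            filters=istore.BaseInferenceFilter(
                time_window_filter=istore.TimeWindowFilter(start="2024-01-01T00:00:00Z"),
            ),
            pagination=istore.Pagination(limit=100, offset=0),
            presign=True,  # also return presigned urls for the input data
        ),
    )
    for inf in inferences:
        print(inf.inference_id, inf.created_at, inf.presigned_url)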
- chariot.inference_store.inference.get_inference(model_id: str, inference_id: str) Inference | None [source]
Get details about a single inference.
- Parameters:
model_id (str) – The model id.
inference_id (str) – The inference id.
- Returns:
The inference.
- Return type:
Optional[models.Inference]
chariot.inference_store.metadata module
- chariot.inference_store.metadata.create_metadata(model_id: str, inference_id: str, request_body: NewExtendedMetadataRequest) Metadata | None [source]
Create new metadata for an inference.
- Parameters:
model_id (str) – The model id.
inference_id (str) – The id of the inference.
request_body (models.NewExtendedMetadataRequest) – The metadata to attach to the inference.
- Returns:
The metadata details.
- Return type:
Optional[models.Metadata]
- chariot.inference_store.metadata.delete_metadata(model_id: str, inference_id: str, key: str) str | None [source]
Delete metadata for a particular inference and key pair.
- Parameters:
model_id (str) – The model id.
inference_id (str) – The id of the inference.
key (str) – The key of the metadata to delete.
- Returns:
The deleted metadata id.
- Return type:
Optional[str]
- chariot.inference_store.metadata.get_data_hash(data: bytes) str [source]
Get the SHA256 hexdigest of the data being inferred upon.
- Parameters:
data (bytes) – The input inference data.
- Returns:
The data hash.
- Return type:
str
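For instance:

    data = b"example payload"
    digest = istore.get_data_hash(data)  # 64-character SHA256 hexdigest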
- chariot.inference_store.metadata.get_metadata(model_id: str, inference_id: str) list[Metadata] [source]
Get all metadata for a particular inference.
- Parameters:
model_id (str) – The model id.
inference_id (str) – The id of the inference.
- Returns:
The metadata details.
- Return type:
List[models.Metadata]
- chariot.inference_store.metadata.get_metadata_key_type_counts(model_id: str) MetadataKeyTypeCounts | None [source]
Get counts of each metadata key, type pair.
- Parameters:
model_id (str) – The model id.
- Returns:
The count of each metadata key, type pair.
- Return type:
Optional[models.MetadataKeyTypeCounts]
- chariot.inference_store.metadata.get_metadata_statistics(model_id: str, request_body: NewGetMetadataStatisticsRequest) MetadataStatistics | None [source]
Get metadata statistics for a particular metadata key, type pair.
- Parameters:
model_id (str) – The model id.
request_body (models.NewGetMetadataStatisticsRequest) – The metadata statistics criteria.
- Returns:
The metadata statistics.
- Return type:
Optional[models.MetadataStatistics]
- chariot.inference_store.metadata.map_to_inference_store_metadata(data: Mapping[str, str | float | int | Mapping[str, Any]]) dict [source]
Map data from a dictionary into the required inference store format, in which all keys, types, and values are encoded as strings.
A few special keys are mapped to the top-level inference record for advanced querying: {"data_source", "latitude", "longitude"}. Any key outside that set is added to extended metadata.
Valid inference store types are: string, float, int, json.
- Parameters:
data (dict) – The metadata.
- Returns:
The typed metadata.
- Return type:
dict
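A sketch of the mapping (keys outside the special set land in extended metadata):

    metadata = istore.map_to_inference_store_metadata(
        {
            "data_source": "camera-01",  # special key: mapped to the top-level record
            "latitude": 45.0,            # special key
            "longitude": -93.0,          # special key
            "frame_number": 1234,        # int -> extended metadata
            "weather": "overcast",       # string -> extended metadata
        }
    )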
chariot.inference_store.models module
- class chariot.inference_store.models.BaseInferenceFilter(*, inference_action_filter: str | None = None, data_hash_filter: str | None = None, data_source_filter: str | None = None, time_window_filter: TimeWindowFilter | None = None, metadata_filter: list[MetadataFilter] | None = None, location_filter: GeolocationFilter | None = None, deconstructed_inference_filter: DeconstructedInferenceFilter | None = None)[source]
Bases:
BaseModel
Helper object for filtering inferences.
- Parameters:
inference_action_filter (str) – Get inferences with a given inference action.
data_hash_filter (str) – Get inferences with a given data hash.
data_source_filter (str) – Get inferences with a given data source.
time_window_filter (models.TimeWindowFilter) – Get inferences within a time window.
metadata_filter (List[models.MetadataFilter]) – Get inferences matching the intersection of one or more metadata filters.
location_filter (models.GeolocationFilter) – Get inferences in a circular or rectangular area on the globe.
deconstructed_inference_filter (models.DeconstructedInferenceFilter) – Get inferences with certain labels or confidence scores.
- data_hash_filter: str | None
- data_source_filter: str | None
- deconstructed_inference_filter: DeconstructedInferenceFilter | None
- inference_action_filter: str | None
- location_filter: GeolocationFilter | None
- metadata_filter: list[MetadataFilter] | None
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- time_window_filter: TimeWindowFilter | None
- class chariot.inference_store.models.DataUpload(*, data_presigned_url: str, data_storage_key: str)[source]
Bases:
BaseModel
Defines an inference store data upload resource.
- Parameters:
data_presigned_url (str) – A presigned url to upload the inference input data to.
data_storage_key (str) – The internal storage key to the inference input data.
- data_presigned_url: str
- data_storage_key: str
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class chariot.inference_store.models.DeconstructedInferenceFilter(*, labels: list[str] | None = None, minimum_score: float | None = None, maximum_score: float | None = None)[source]
Bases:
BaseModel
Helper object for filtering inferences by labels and scores.
- Parameters:
labels (Optional[List[str]]) – The list of labels to filter on.
minimum_score (Optional[float]) – The minimum confidence score.
maximum_score (Optional[float]) – The maximum confidence score.
- labels: list[str] | None
- maximum_score: float | None
- minimum_score: float | None
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class chariot.inference_store.models.DeleteAction(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases:
str, Enum
The selection of delete actions.
- HARD = 'hard'
- SOFT = 'soft'
- class chariot.inference_store.models.EmbeddingDistanceMetric(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases:
str, Enum
The selection of distance metrics supported by the inference store.
- COSINE = 'cosine'
- L2 = 'l2'
- NEGATIVE_DOT_PRODUCT = 'negative_dot_product'
- class chariot.inference_store.models.EmbeddingFilter(*, query_embedding: list[float], distance_metric: str | None = EmbeddingDistanceMetric.COSINE)[source]
Bases:
BaseModel
Get embeddings using a query embedding and distance metric.
- Parameters:
query_embedding (List[float]) – The query embedding to search against.
distance_metric (Optional[str]) – The distance metric operator (defaults to cosine).
- distance_metric: str | None
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': (), 'use_enum_values': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- query_embedding: list[float]
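For example, a sketch of a nearest-neighbor query against an embedding model, sorting results by embedding distance (the query vector is a placeholder and must match the model's embedding size):

    results = istore.filter_inferences(
        MY_EMBEDDING_MODEL_ID,
        istore.NewGetInferencesRequest(
            embedding_filters=istore.EmbeddingFilter(
                query_embedding=[0.1, 0.2, 0.3],  # placeholder vector
                distance_metric=istore.EmbeddingDistanceMetric.COSINE,
            ),
            pagination=istore.Pagination(
                sort=istore.PaginationEmbeddingSortField.EMBEDDING_DISTANCE,
                limit=10,
            ),
        ),
    )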
- class chariot.inference_store.models.ExportTask(*, id: str, progress_count: int, expected_count: int, state: str, presigned_url: str | None = None)[source]
Bases:
BaseModel
Defines an inference store export task resource.
- Parameters:
id (str) – The export task id.
progress_count (int) – The current number of inferences added to the archive file.
expected_count (int) – The expected number of inferences to be added to the archive file.
state (str) – The export task state.
presigned_url (Optional[str]) – A presigned url to the archive if the export task is complete.
- expected_count: int
- id: str
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- presigned_url: str | None
- progress_count: int
- state: str
- class chariot.inference_store.models.GeoCircle(*, center: GeoPoint, radius: float)[source]
Bases:
BaseModel
Defines a circular search area using a center point and radius.
- Parameters:
center (GeoPoint) – The center point of the circle.
radius (float) – The radius in meters to expand from the center.
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- radius: float
- class chariot.inference_store.models.GeoPoint(*, latitude: float, longitude: float)[source]
Bases:
BaseModel
Defines a point on the globe.
- Parameters:
latitude (float) – A latitude value in decimal format between -90 and 90.
longitude (float) – A longitude value in decimal format between -180 and 180.
- latitude: float
- longitude: float
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class chariot.inference_store.models.GeoRectangle(*, p1: GeoPoint, p2: GeoPoint)[source]
Bases:
BaseModel
Defines a rectangular search area using two points on the globe.
- Parameters:
p1 (GeoPoint) – The first corner of the rectangle.
p2 (GeoPoint) – The opposite corner of the rectangle.
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class chariot.inference_store.models.GeolocationFilter(*, gps_coordinates_circle: GeoCircle | None = None, gps_coordinates_rectangle: GeoRectangle | None = None)[source]
Bases:
BaseModel
Defines a search area on the globe, either circular or rectangular.
- Parameters:
gps_coordinates_circle (models.GeoCircle) – The circular area on the globe to search for inferences.
gps_coordinates_rectangle (models.GeoRectangle) – The rectangular area on the globe to search for inferences.
- gps_coordinates_rectangle: GeoRectangle | None
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class chariot.inference_store.models.Inference(*, model_id: str, inference_id: str, created_at: datetime, updated_at: datetime, inference_action: str, data: str | None = None, embedding_distance: float | None = None, metadata: list[Metadata] | None = None, data_hash: str, data_source: str | None = None, data_coordinates: GeoPoint | None = None, data_storage_key: str | None = None, presigned_url: str | None = None, is_protected: bool = False, version: str)[source]
Bases:
BaseModel
Defines an inference store inference resource.
- Parameters:
model_id (str) – The model id.
inference_id (str) – The inference id.
created_at (datetime) – A timestamp of when the inference was created.
updated_at (datetime) – A timestamp of when the inference was updated.
inference_action (str) – The inference action.
data (Optional[str]) – The inference data. Returned as an arbitrary string when the associated model is not registered as an embedding model.
embedding_distance (Optional[float]) – The embedding distance from a query embedding.
metadata (Optional[List[Metadata]]) – The collection of metadata associated to the inference.
data_hash (str) – The hash of the inference input data.
data_source (Optional[str]) – The data source of the inference.
data_coordinates (Optional[GeoPoint]) – A set of geospatial coordinates defining where the inference occurred.
data_storage_key (Optional[str]) – The internal data storage key for the inference input data.
presigned_url (Optional[str]) – A presigned url to download the inference input data.
is_protected (bool) – Whether the inference and its associated data should be protected from deletion by a retention policy.
version (str) – The inference version.
- created_at: datetime
- data: str | None
- data_hash: str
- data_source: str | None
- data_storage_key: str | None
- embedding_distance: float | None
- inference_action: str
- inference_id: str
- is_protected: bool
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_id: str
- presigned_url: str | None
- property structured_embedding: list[int | float] | None
- updated_at: datetime
- version: str
- class chariot.inference_store.models.InferenceAction(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases:
str, Enum
The selection of inference actions relative to task types supported by the inference store.
- DETECT = 'detect'
- EMBED = 'embed'
- PREDICT = 'predict'
- PREDICT_PROBA = 'predict_proba'
- class chariot.inference_store.models.Metadata(*, key: str, type: str, value: str)[source]
Bases:
BaseModel
Defines an inference store metadata resource.
- Parameters:
key (str) – The name of the metadata key.
type (str) – The metadata value type (string, int, float, json).
value (str) – The metadata value.
- key: str
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- type: str
- value: str
- class chariot.inference_store.models.MetadataFilter(*, key: str, operator: MetadataFilterOperator, type: MetadataFilterType, value: str)[source]
Bases:
BaseModel
Get inferences matching a given metadata inequality/constraint.
- Parameters:
key (str) – The name of the metadata key. Specify json keys using dot notation. For example, to filter against the nested key "y" in {"x": {"y": 10, "z": 5}}, use x.y.
operator (models.MetadataFilterOperator) – The equality/inequality operator to compare a value against (=, !=, >, <, >=, <=, in).
type (models.MetadataFilterType) – The metadata value type (string, int, float, json.string, json.int, json.float).
value (str) – The value the operator compares against. If using the 'in' operator, pass a list of unquoted values using brackets and comma separation: [v1, v2, ..., vn].
- static form_values_for_in_operator(values_type: MetadataFilterType, values: list[str] | list[int] | list[float]) str | None [source]
- key: str
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': (), 'use_enum_values': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- operator: MetadataFilterOperator
- type: MetadataFilterType
- value: str
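For example, a sketch of an in filter built with the helper above:

    split_filter = istore.MetadataFilter(
        key="dataset_snapshot_split",
        type=istore.MetadataFilterType.STRING,
        operator=istore.MetadataFilterOperator.IN,
        value=istore.MetadataFilter.form_values_for_in_operator(
            istore.MetadataFilterType.STRING, ["train", "val"]
        ),
    )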
- class chariot.inference_store.models.MetadataFilterOperator(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases:
str, Enum
The selection of metadata filter operators supported by the inference store.
- EQUAL = '='
- GREATER = '>'
- GREATER_OR_EQUAL = '>='
- IN = 'in'
- LESS = '<'
- LESS_OR_EQUAL = '<='
- NOT_EQUAL = '!='
- class chariot.inference_store.models.MetadataFilterType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases:
str, Enum
The selection of metadata filter types supported by the inference store.
- FLOAT = 'float'
- INT = 'int'
- JSON_FLOAT = 'json.float'
- JSON_INT = 'json.int'
- JSON_STRING = 'json.string'
- STRING = 'string'
- class chariot.inference_store.models.MetadataKeyTypeCounts(*, counts: dict[str, dict[str, int]])[source]
Bases:
BaseModel
Defines an inference store metadata key type count resource.
- Parameters:
counts (Dict[str, Dict[str, int]]) – The count of each metadata key, type pair.
- counts: dict[str, dict[str, int]]
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class chariot.inference_store.models.MetadataStatistics(*, count: int | None = None, distribution: dict[str, int] | None = None, min: float | None = None, max: float | None = None)[source]
Bases:
BaseModel
Defines an inference store metadata statistics resource.
- Parameters:
count (int) – The number of observations given the request filters.
distribution (Dict[str, int]) – The distribution of values.
min (float) – The minimum value in the set of metadata values.
max (float) – The maximum value in the set of metadata values.
- count: int | None
- distribution: dict[str, int] | None
- max: float | None
- min: float | None
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class chariot.inference_store.models.MetadataStatisticsType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases:
str, Enum
The selection of metadata statistics types supported by the inference store.
- FLOAT = 'float'
- INT = 'int'
- STRING = 'string'
- class chariot.inference_store.models.MetadataType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases:
str, Enum
The selection of metadata types supported by the inference store.
- FLOAT = 'float'
- INT = 'int'
- JSON = 'json'
- STRING = 'string'
- class chariot.inference_store.models.MetadataUpload(*, metadata_presigned_url: str, metadata_storage_key: str)[source]
Bases:
BaseModel
Defines an inference store metadata upload resource.
- Parameters:
metadata_presigned_url (str) – A presigned url to upload the inference and metadata to.
metadata_storage_key (str) – The internal storage key to the inference and metadata.
- metadata_presigned_url: str
- metadata_storage_key: str
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class chariot.inference_store.models.Model(*, model_id: str, created_at: datetime, updated_at: datetime, embedding_size: int)[source]
Bases:
BaseModel
Defines an inference store model resource.
- Parameters:
model_id (str) – The model id.
created_at (datetime) – A timestamp of when the model was created.
updated_at (datetime) – A timestamp of when the model was updated.
embedding_size (int) – The dimension of the embeddings produced.
- created_at: datetime
- embedding_size: int
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_id: str
- updated_at: datetime
- class chariot.inference_store.models.NewExportTaskRequest(*, filters: ~chariot.inference_store.models.BaseInferenceFilter = BaseInferenceFilter(inference_action_filter=None, data_hash_filter=None, data_source_filter=None, time_window_filter=None, metadata_filter=None, location_filter=None, deconstructed_inference_filter=None), include_inference_ids: list[str] = <factory>, exclude_inference_ids: list[str] = <factory>, include_inferences_as_annotations: bool = False, include_custom_metadata: bool = False)[source]
Bases:
BaseModel
Helper object to create an export task.
- Parameters:
filters (models.BaseInferenceFilter) – A collection of filters.
include_inference_ids (list[str]) – The list of additional inference ids to include.
exclude_inference_ids (list[str]) – The list of additional inference ids to exclude.
include_inferences_as_annotations (bool) – Whether to include the inferences as annotations (only eligible for version=v2 inferences).
include_custom_metadata (bool) – Whether to include custom metadata attached to the inference.
- exclude_inference_ids: list[str]
- filters: BaseInferenceFilter
- include_custom_metadata: bool
- include_inference_ids: list[str]
- include_inferences_as_annotations: bool
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class chariot.inference_store.models.NewExtendedMetadataRequest(*, key: str, type: MetadataType, value: str)[source]
Bases:
BaseModel
Helper object to attach extended/custom metadata to an inference.
- Parameters:
key (str) – The name of the metadata key.
type (models.MetadataType) – The metadata value type (string, int, float, json).
value (str) – The metadata value.
- key: str
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': (), 'use_enum_values': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- type: MetadataType
- value: str
- class chariot.inference_store.models.NewGetInferencesRequest(*, filters: BaseInferenceFilter = BaseInferenceFilter(inference_action_filter=None, data_hash_filter=None, data_source_filter=None, time_window_filter=None, metadata_filter=None, location_filter=None, deconstructed_inference_filter=None), embedding_filters: EmbeddingFilter | None = None, pagination: Pagination | None = None, presign: bool = False)[source]
Bases:
BaseModel
Get inferences matching a series of filters.
- Parameters:
filters (models.BaseInferenceFilter) – A collection of filters.
embedding_filters (models.EmbeddingFilter) – An extended set of filters for embedding models.
pagination (models.Pagination) – Get inferences matching pagination constraints.
presign (bool) – Whether to presign data_storage_key(s).
- embedding_filters: EmbeddingFilter | None
- filters: BaseInferenceFilter
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- pagination: Pagination | None
- presign: bool
- class chariot.inference_store.models.NewGetMetadataStatisticsRequest(*, filters: BaseInferenceFilter | None = None, key: str, type: MetadataStatisticsType, distribution_bin_count: int | None = None, distribution_bin_width: float | None = None, distribution_maximum_value: float | None = None, distribution_minimum_value: float | None = None)[source]
Bases:
BaseModel
Helper object to get metadata statistics.
- Parameters:
filters (models.BaseInferenceFilter) – A collection of filters.
key (str) – Filter by metadata key.
type (models.MetadataStatisticsType) – Filter by metadata type.
distribution_bin_count (int) – Number of bins in the distribution. Defaults to producing 10 bins if not specified.
distribution_bin_width (float) – Width of a bin within the distribution. The bin count and bin width cannot both be specified.
distribution_minimum_value (float) – The minimum value in which binning begins. Defaults to the minimum value within the time window specified.
distribution_maximum_value (float) – The maximum value in which binning ends. Defaults to the maximum value within the time window specified.
- distribution_bin_count: int | None
- distribution_bin_width: float | None
- distribution_maximum_value: float | None
- distribution_minimum_value: float | None
- filters: BaseInferenceFilter | None
- key: str
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': (), 'use_enum_values': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- type: MetadataStatisticsType
- class chariot.inference_store.models.NewGetRetentionTasksRequest(*, retention_policy_id: str | None = None, state_filter: RetentionTaskState | None = None, time_window_filter: TimeWindowFilter | None = None, pagination: Pagination | None = None)[source]
Bases:
BaseModel
Helper object to filter retention tasks.
- Parameters:
retention_policy_id (str) – Filter by retention policy id.
state_filter (models.RetentionTaskState) – Filter by retention task state.
time_window_filter (models.TimeWindowFilter) – Get retention tasks within a time window.
pagination (models.Pagination) – Get retention tasks matching pagination constraints.
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': (), 'use_enum_values': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- pagination: Pagination | None
- retention_policy_id: str | None
- state_filter: RetentionTaskState | None
- time_window_filter: TimeWindowFilter | None
- class chariot.inference_store.models.NewInferenceAndMetadataCollection(*, image_classification: list[dict] = None, object_detection: list[dict] = None, image_segmentation: list[dict] = None, oriented_object_detection: list[dict] = None, embedding: list[float] = None, metadata: NewMetadataCollectionRequest)[source]
Bases:
BaseModel
Helper object to collect the inference and its standard and extended metadata. The inference store keeps the inference in whatever format the model/inference server natively returns. To facilitate label search, filtering by confidence score, and standardized region/geometry elements, the deconstructed_inference field must be specified. If it is not specified, the inference will still be stored, but it will not be retrievable by the aforementioned filters.
For example, an object detection inference might be returned in the following format:
{ "detection_boxes": [[10.0, 10.0, 20.0, 20.0], [30.0, 30.0, 40.0, 40.0]], "detection_classes": ["car", "truck"], "detection_scores": [0.9, 0.95] }
Another, equally well trained model might return a very similar result but in the following format:
[ {"bounding_box": [10.0, 10.0, 20.0, 20.0], "score": 0.9, "label": "car"} {"bounding_box": [30.0, 30.0, 40.0, 40.0], "score": 0.95, "label": "truck"} ]
Thus, a standard task-type conditional structure is needed so models with different output formats can speak the same language.
Call the encode method to produce the corresponding json data for upload to blob storage.
- Parameters:
image_classification (List[dict]) – The collection of image classification inferences
object_detection (List[dict]) – The collection of object detection inferences
image_segmentation (List[dict]) – The collection of image segmentation inferences
oriented_object_detection (List[dict]) – The collection of oriented object detection inferences
embedding (List[float]) – An embedding.
metadata (models.NewMetadataCollectionRequest) – The collection of metadata associated to the inference.
- embedding: list[float]
- image_classification: list[dict]
- image_segmentation: list[dict]
- metadata: NewMetadataCollectionRequest
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- object_detection: list[dict]
- oriented_object_detection: list[dict]
- class chariot.inference_store.models.NewInferenceStorageRequest(*, model_id: str, inference_id: str, data: NewInferenceAndMetadataCollection, data_storage_key: str | None = None, is_protected: bool = False)[source]
Bases:
BaseModel
Helper object to store an inference.
- Parameters:
model_id (str) – The id of the model.
inference_id (str) – The id of the inference.
data (models.NewInferenceAndMetadataCollection) – The collection of inference and metadata.
data_storage_key (str) – An optional data storage key returned from the upload.upload_data function.
is_protected (bool) – Whether the inference and its associated data should be protected from deletion by a retention policy.
- data_storage_key: str | None
- inference_id: str
- is_protected: bool
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_id: str
- class chariot.inference_store.models.NewMetadataCollectionRequest(*, standard_metadata: NewStandardMetadataRequest, extended_metadata: list[NewExtendedMetadataRequest] = [])[source]
Bases:
BaseModel
Helper object to collect standard and extended metadata to be associated with an inference. Call encode to produce the corresponding json data for upload to blob storage.
- Parameters:
standard_metadata (models.NewStandardMetadataRequest) – The standard metadata.
extended_metadata (List[models.NewExtendedMetadataRequest]) – The extended metadata. Defaults to an empty list.
- extended_metadata: list[NewExtendedMetadataRequest]
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- standard_metadata: NewStandardMetadataRequest
- class chariot.inference_store.models.NewRegisterModelRequest(*, model_id: str, project_id: str, task_type: TaskType, embedding_size: int = 0)[source]
Bases:
BaseModel
Helper object to register a model for inference storage.
- Parameters:
model_id (str) – The model id.
project_id (str) – The project id.
task_type (TaskType) – The model task type.
embedding_size (int) – The dimension of the embeddings produced.
- embedding_size: int
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': (), 'use_enum_values': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_id: str
- project_id: str
- class chariot.inference_store.models.NewRetentionPolicyRequest(*, automated_interval: int, maximum_record_age: int, maximum_blob_age: int, delete_action: DeleteAction = DeleteAction.SOFT)[source]
Bases:
BaseModel
Helper object to create a retention policy. A value of -1 for maximum_record_age indicates that inferences should never be deleted. The maximum_blob_age must be greater than or equal to 0. The maximum_blob_age must also be equal to or less than the maximum_record_age.
- Parameters:
automated_interval (int) – Interval to automatically run retention policy in hours. Set to 0 if manual executions are desired.
maximum_record_age (int) – The maximum age (in hours) since now in which an inference and its associated data is safe from deletion.
maximum_blob_age (int) – The maximum age (in hours) since now in which a blob is safe from deletion.
delete_action (models.DeleteAction) – Whether to 'soft' or 'hard' delete the database records.
- automated_interval: int
- delete_action: DeleteAction
- maximum_blob_age: int
- maximum_record_age: int
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': (), 'use_enum_values': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class chariot.inference_store.models.NewRetentionTaskRequest(*, dry_run: bool, retention_policy_id: str)[source]
Bases:
BaseModel
Helper object to create a retention task.
- Parameters:
dry_run (bool) – If true, returns the number of inferences that would be deleted if the retention policy was fully executed.
retention_policy_id (str) – The id of the retention policy to run.
- dry_run: bool
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- retention_policy_id: str
- class chariot.inference_store.models.NewStandardMetadataRequest(*, data_hash: str, data_size: int, task_type: TaskType, inference_action: InferenceAction, data_source: str | None = None, latitude: float | None = None, longitude: float | None = None)[source]
Bases:
BaseModel
Helper object to attach standard metadata to an inference.
- Currently supported task type, inference action pairs:
Image Classification: [predict, predict_proba, embed]
Object Detection: [detect]
Image Segmentation: [predict, predict_proba]
Image Embedding: [embed]
- Parameters:
data_hash (str) – The SHA256 hexdigest of the data inferred upon. Use the helper function get_data_hash if necessary.
data_size (int) – The size of the input data.
task_type (models.TaskType) – The task type of the model.
inference_action (models.InferenceAction) – The inference action passed to the inference server.
data_source (Optional[str]) – An optional field to express the origin of the inference data.
latitude (Optional[float]) – An optional field to store the latitude the inference data was captured at.
longitude (Optional[float]) – An optional field to store the longitude the inference data was captured at.
- data_hash: str
- data_size: int
- data_source: str | None
- inference_action: InferenceAction
- latitude: float | None
- longitude: float | None
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': (), 'use_enum_values': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class chariot.inference_store.models.Pagination(*, direction: PaginationSortDirection | None = None, limit: int | None = None, offset: int | None = None, sort: PaginationSortField | PaginationEmbeddingSortField | PaginationGranularInferenceSortField | None = None)[source]
Bases:
BaseModel
Get inferences matching pagination constraints.
- Parameters:
direction (models.PaginationSortDirection) – Direction in which to sort the ‘sort’ field (defaults to asc).
limit (int) – Limit the number of inferences returned (defaults to 50).
offset (int) – Offset the starting position of inferences to be returned (defaults to 0).
sort (Union[models.PaginationSortField, models.PaginationEmbeddingSortField, models.PaginationGranularInferenceSortField]) – The column to sort by (defaults to created_at).
- direction: PaginationSortDirection | None
- limit: int | None
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': (), 'use_enum_values': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- offset: int | None
- class chariot.inference_store.models.PaginationEmbeddingSortField(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases:
str, Enum
Defines the sort field options for embedding models.
- CREATED_AT = 'created_at'
- EMBEDDING_DISTANCE = 'embedding_distance'
- class chariot.inference_store.models.PaginationGranularInferenceSortField(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases:
str, Enum
Defines the sort field options for granular inferences.
- CREATED_AT = 'created_at'
- LABEL = 'label'
- SCORE = 'score'
- class chariot.inference_store.models.PaginationSortDirection(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases:
str, Enum
Defines the sort direction options.
- ASCENDING = 'asc'
- DESCENDING = 'desc'
- class chariot.inference_store.models.PaginationSortField(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases:
str
,Enum
Defines the sort field options.
- CREATED_AT = 'created_at'
- class chariot.inference_store.models.RetentionPolicy(*, id: str, model_id: str, delete_action: str, maximum_record_age: int, maximum_blob_age: int, automated_interval: int | None = None, last_scheduled_at: datetime | None = None)[source]
Bases:
BaseModel
Defines an inference store retention policy resource.
- Parameters:
id (str) – The retention policy id.
model_id (str) – The model id.
delete_action (str) – Whether records are soft deleted or hard deleted.
maximum_record_age (int) – The maximum age (in hours) in which an inference and its associated data is safe from deletion.
maximum_blob_age (int) – The maximum age (in hours) in which a blob is safe from deletion.
automated_interval (int) – The interval (in hours) in which this retention policy will be automatically run.
last_scheduled_at (datetime) – The last time a retention task for this retention policy was run.
- automated_interval: int | None
- delete_action: str
- id: str
- last_scheduled_at: datetime | None
- maximum_blob_age: int
- maximum_record_age: int
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_id: str
- class chariot.inference_store.models.RetentionTask(*, id: str, retention_policy_id: str, deleted_record_count: int, deleted_blob_count: int, total_record_count: int, total_blob_count: int, state: str, maximum_record_age_limit: datetime, maximum_blob_age_limit: datetime)[source]
Bases:
BaseModel
Defines an inference store retention task resource.
- Parameters:
id (str) – The retention task id.
retention_policy_id (str) – The retention policy id.
deleted_record_count (int) – The count of all records currently deleted.
deleted_blob_count (int) – The count of all blobs currently deleted.
total_record_count (int) – The expected total count of all records to be deleted.
total_blob_count (int) – The expected total count of all blobs to be deleted.
state (str) – The state of the retention task.
maximum_record_age_limit (datetime) – The timestamp after which records are safe from deletion.
maximum_blob_age_limit (datetime) – The timestamp after which blobs are safe from deletion.
- deleted_blob_count: int
- deleted_record_count: int
- id: str
- maximum_blob_age_limit: datetime
- maximum_record_age_limit: datetime
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- retention_policy_id: str
- state: str
- total_blob_count: int
- total_record_count: int
- class chariot.inference_store.models.RetentionTaskState(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases:
str, Enum
Defines the retention task state options.
- COMPLETE = 'complete'
- FAILED = 'failed'
- PENDING = 'pending'
- RUNNING = 'running'
- SCHEDULED = 'scheduled'
- STOPPED = 'stopped'
- class chariot.inference_store.models.TaskType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases:
str, Enum
The selection of task types supported by the inference store.
- IMAGE_CLASSIFICATION = 'Image Classification'
- IMAGE_EMBEDDING = 'Image Embedding'
- IMAGE_SEGMENTATION = 'Image Segmentation'
- OBJECT_DETECTION = 'Object Detection'
- ORIENTED_OBJECT_DETECTION = 'Oriented Object Detection'
- TEXT_EMBEDDING = 'Text Embedding'
- class chariot.inference_store.models.TimeWindowFilter(*, start: str | None = None, end: str | None = None)[source]
Bases:
BaseModel
Get inferences within a time window expressed in the following datetime format: YYYY-MM-DDTHH:MM:SSZ. To format an existing datetime object: dt.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
- Parameters:
start (Optional[str]) – The left time bound. Defaults to 1970-01-01T00:00:00Z if not supplied.
end (Optional[str]) – The right time bound. Defaults to now if not supplied.
- end: str | None
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- start: str | None
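For example, a sketch of building the filter from datetime objects:

    from datetime import datetime, timedelta, timezone

    now = datetime.now(timezone.utc)
    window = istore.TimeWindowFilter(
        start=(now - timedelta(days=7)).strftime("%Y-%m-%dT%H:%M:%SZ"),
        end=now.strftime("%Y-%m-%dT%H:%M:%SZ"),
    )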
chariot.inference_store.register module
- chariot.inference_store.register.get_registered_model(model_id: str) Model | None [source]
Get details about a model registered for inference storage.
- Parameters:
model_id (str) – The model id.
- Returns:
The model details.
- Return type:
Optional[models.Model]
- chariot.inference_store.register.register_model_for_inference_storage(request_body: NewRegisterModelRequest) str | None [source]
Register a model for inference storage.
- Parameters:
request_body (models.NewRegisterModelRequest) – The model to register for inference storage.
- Returns:
The registered model id.
- Return type:
Optional[str]
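A sketch of registering an object detection model (the ids are placeholders):

    registered_id = istore.register_model_for_inference_storage(
        istore.NewRegisterModelRequest(
            model_id=MY_MODEL_ID,
            project_id=MY_PROJECT_ID,
            task_type=istore.TaskType.OBJECT_DETECTION,
        )
    )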
chariot.inference_store.retention_policy module
- chariot.inference_store.retention_policy.create_retention_policy(model_id: str, request_body: NewRetentionPolicyRequest) RetentionPolicy | None [source]
Create a new retention policy for a particular model.
- Parameters:
model_id (str) – The model id.
request_body (models.NewRetentionPolicyRequest) – The retention policy to attach to the model.
- Returns:
Retention policy details.
- Return type:
Optional[models.RetentionPolicy]
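For example, a sketch of a daily policy that soft deletes records after 30 days and blobs after 7 days:

    policy = istore.create_retention_policy(
        MY_MODEL_ID,
        istore.NewRetentionPolicyRequest(
            automated_interval=24,       # run automatically once a day
            maximum_record_age=24 * 30,  # records older than 30 days are deleted
            maximum_blob_age=24 * 7,     # blobs older than 7 days are deleted
            delete_action=istore.DeleteAction.SOFT,
        ),
    )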
- chariot.inference_store.retention_policy.delete_retention_policy(model_id: str, retention_policy_id: str) str | None [source]
Delete a retention policy for a model.
- Parameters:
model_id (str) – The model id.
retention_policy_id (str) – The retention policy id.
- Returns:
The deleted retention policy id.
- Return type:
Optional[str]
- chariot.inference_store.retention_policy.get_retention_policy(model_id: str) RetentionPolicy | None [source]
Get the current retention policy for a model.
- Parameters:
model_id (str) – The model id.
- Returns:
Retention policy details.
- Return type:
Optional[models.RetentionPolicy]
chariot.inference_store.retention_task module
- chariot.inference_store.retention_task.create_retention_task(model_id: str, request_body: NewRetentionTaskRequest) RetentionTask | None [source]
Create a retention task associated with a non-automated retention policy.
- Parameters:
model_id (str) – The model id.
request_body (models.NewRetentionTaskRequest) – The retention policy to execute.
- Returns:
The retention task.
- Return type:
Optional[models.RetentionTask]
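For example, a sketch of a dry run against a model's current policy:

    policy = istore.get_retention_policy(MY_MODEL_ID)
    if policy is not None:
        task = istore.create_retention_task(
            MY_MODEL_ID,
            istore.NewRetentionTaskRequest(dry_run=True, retention_policy_id=policy.id),
        )
        if task is not None:
            print(f"{task.total_record_count} records would be deleted")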
- chariot.inference_store.retention_task.filter_retention_tasks(model_id: str, request_body: NewGetRetentionTasksRequest) list[RetentionTask] [source]
Filter retention tasks.
- Parameters:
model_id (str) – The model id.
request_body (models.NewGetRetentionTasksRequest) – The retention task filter.
- Returns:
The collection of retention tasks that meet the filter criteria.
- Return type:
List[models.RetentionTask]
- chariot.inference_store.retention_task.get_retention_task(model_id: str, retention_task_id: str) RetentionTask | None [source]
Get a retention task.
- Parameters:
model_id (str) – The model id.
retention_task_id (str) – The retention task id.
- Returns:
The retention task.
- Return type:
Optional[models.RetentionTask]
chariot.inference_store.upload module
- chariot.inference_store.upload.upload_data(model_id: str) DataUpload | None [source]
Get an internal storage key and a presigned url to upload data to.
- Parameters:
model_id (str) – The model id.
- Returns:
The upload data details.
- Return type:
Optional[models.DataUpload]
- chariot.inference_store.upload.upload_inference_and_metadata(model_id: str) MetadataUpload | None [source]
Get an internal storage key and a presigned url to upload the inference and metadata to.
- Parameters:
model_id (str) – The model id.
- Returns:
The upload metadata details.
- Return type:
Optional[models.MetadataUpload]
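For example, a sketch of the presigned upload flow: request an upload location, PUT the raw bytes, then reference the returned storage key when creating the inference (the requests library and the collection variable, a NewInferenceAndMetadataCollection, are assumptions):

    import requests

    upload = istore.upload_data(MY_MODEL_ID)
    if upload is not None:
        # Upload the raw input data to the presigned url.
        requests.put(upload.data_presigned_url, data=image_bytes)
        # Reference the storage key instead of re-sending the data.
        istore.create_inference(
            MY_MODEL_ID,
            istore.NewInferenceStorageRequest(
                model_id=MY_MODEL_ID,
                inference_id=MY_INFERENCE_ID,
                data=collection,
                data_storage_key=upload.data_storage_key,
            ),
        )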