Annotation
Chariot's annotation tools empower you to efficiently label your data, combining the strengths of both AI-driven automation and human expertise. Whether you're working with images or text, Chariot supports a wide range of annotation tasks to suit your project's needs.
Currently, Chariot supports the following annotation task types:
- Image classification
- Image segmentation
- Object detection
- Text classification
- Text generation
- Token classification
You can annotate data in two flexible ways: Use the Datum Preview for quick on-the-fly edits, or set up structured annotation tasks to manage larger collaborative labeling projects.
Ad Hoc Annotation
To get started quickly, you can add, remove, update, and review annotations directly in the Datum Preview within the Datasets tab. This allows you to quickly filter datums and modify annotations for specific ones.
You can easily share individual datums with colleagues for review or a second opinion. Simply copy and paste the URL containing the specific datum ID.
Annotation Tasks
For large collaborative annotation efforts and access to advanced features, such as AI-assisted annotation, use dedicated annotation tasks. An annotation task represents a specific, managed instance of data annotation within a dataset.
Creating an Annotation Task
Navigate to your project in Chariot. Click the Annotations tab, and click the Create Annotation Task button.
When prompted, provide the following information:
- Name: Provide a name for the annotation task.
- Description: Provide a short description of the annotation task.
- Data Type: Choose between image datasets and text datasets.
- Task Type: Depending on the data type chosen above, specific task types will be available to select.
- Image data types allow for image classification, object detection, oriented object detection, and image segmentation task types.
- Text data types allow for text classification, token classification, and text generation task types.
Click the Next button.
Next, you will configure your task. Click Select Dataset and follow the prompts to select your target dataset.
For image data type tasks, you can optionally define Data Filters; these include Captured Date Range, Metadata, and Location Boundary.
Finally, you must select the labels you intend to use while annotating datums.
Click Create Task.
Once the annotation task has been created, you will be directed to the Annotation Task Detail page.
On this page, you can view the details of the task, its progress, and the existing label distribution. From there, you can open the Annotation Editor, or archive the task if needed by opening the ellipsis menu and selecting Archive Task.
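Taken together, the information you provide during creation amounts to a task definition like the one sketched below. This is purely illustrative, a plain Python dictionary mirroring the fields described above; it is not Chariot's actual API payload or SDK call.

```python
# Hypothetical sketch only: the fields collected by the Create Annotation Task
# flow, expressed as a plain Python dictionary. Chariot's real API/SDK shapes
# may differ.
annotation_task = {
    "name": "vehicles-batch-1",                            # Name
    "description": "Label vehicles in overhead imagery",   # Description
    "data_type": "image",                                  # "image" or "text"
    "task_type": "object_detection",                       # must match the data type
    "dataset": "overhead-imagery-2024",                    # chosen via Select Dataset
    "data_filters": {                                      # optional, image tasks only
        "captured_date_range": ["2024-01-01", "2024-06-30"],
        "metadata": {"sensor": "EO"},
        "location_boundary": None,
    },
    "labels": ["car", "truck", "bus"],                     # labels used while annotating
}
```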
Annotation Editor
The Annotation Editor allows you to navigate through the datums that match the task's filters. As you work through each datum in the task, you can edit annotations, add new annotations, download individual datums, and view datum metadata. You can also enable Model Hinting, a feature that increases annotation speed and accuracy by letting you select models from the Chariot Model Catalog to pre-annotate your data.
To edit existing annotations, select them on the image or from the side navigation bar. From there, you can adjust the annotation on the image as needed, change its label, delete it, or hide it.
To add new annotations, select a label from the left sidebar. Depending on the task type, you'll then need to mark the datum appropriately. For example, for oriented object detection, you'll use the reticule tool to draw a bounding box on the datum around the target object, then use the rotation tool on the canvas to fit the object more tightly than an axis-aligned bounding box allows.
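To see why an oriented box fits a rotated object more tightly than an axis-aligned one, here is a small offline illustration using the shapely library (not part of Chariot): it compares a rotated object's axis-aligned envelope with its minimum rotated rectangle.

```python
from shapely import affinity
from shapely.geometry import Polygon

# A 10 x 2 rectangular object rotated 30 degrees, as it might appear in an image.
obj = affinity.rotate(Polygon([(0, 0), (10, 0), (10, 2), (0, 2)]), 30)

axis_aligned = obj.envelope               # axis-aligned bounding box
oriented = obj.minimum_rotated_rectangle  # oriented (rotated) bounding box

print(f"object area:       {obj.area:.1f}")           # 20.0
print(f"axis-aligned box:  {axis_aligned.area:.1f}")  # ~65, mostly empty space
print(f"oriented box:      {oriented.area:.1f}")      # ~20, hugs the object
```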
For Image Segmentation tasks, you can leverage Meta's Segment Anything Model (SAM2) when creating new segmentation masks. See Segment Anything Lasso Tool for more details.
Splitting Polygons
For image segmentation tasks, different entities may have been incorrectly segmented as a single entity, requiring you to split them. To split a polygon, select a starting point and click the scissors icon to initiate the split. Then choose additional points within the polygon and select a final point to complete the split.
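The same idea can be illustrated offline with the shapely library: a conceptual sketch of cutting one polygon into two along a line through the chosen points. This is not how Chariot implements the scissors tool.

```python
from shapely.geometry import LineString, Polygon
from shapely.ops import split

# One polygon that mistakenly covers two adjacent entities.
merged = Polygon([(0, 0), (8, 0), (8, 4), (0, 4)])

# The "scissors" path: a line through the selected split points.
# It must cross the polygon completely for the split to take effect.
cut = LineString([(4, -1), (4, 5)])

pieces = split(merged, cut)
for piece in pieces.geoms:
    print(piece)  # two polygons, one per entity
```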
Annotation Review Status
In addition to adding and editing annotations, you can also update an annotation's review status. Review statuses support a variety of workflows. One common use is to ensure annotation quality through a review process: an annotator creates annotations, and a separate reviewer later verifies them. Data scientists can then train models only on verified annotations by creating Views that contain only verified annotations (illustrated after the list below). The currently supported review statuses are:
- Unreviewed: No review status has been set (i.e., null status).
- Verified: The annotation has been reviewed and marked as valid and correct.
- Rejected: The annotation has been reviewed and marked as incorrect; it either needs adjustment or deletion.
- Needs Review: The annotation has been flagged for further review. This could be because a human annotator is unsure about the annotation's correctness, or because it was generated by AI and requires human verification.
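For example, a downstream consumer that should only see verified annotations can filter on this status. The snippet below is a hypothetical sketch over a plain list of annotation records; the field names are illustrative, not Chariot's schema.

```python
# Hypothetical annotation records; "review_status" mirrors the statuses above.
annotations = [
    {"id": 1, "label": "car",   "review_status": "verified"},
    {"id": 2, "label": "truck", "review_status": "needs_review"},
    {"id": 3, "label": "bus",   "review_status": None},           # unreviewed
    {"id": 4, "label": "car",   "review_status": "rejected"},
]

# Keep only annotations a reviewer has marked as valid and correct,
# analogous to a View that contains only verified annotations.
verified_only = [a for a in annotations if a["review_status"] == "verified"]
print(verified_only)
```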
Datum Navigation in a Task
Once you've completed any annotation updates on a specific datum, click Next to move to the next datum in the task; this marks the current datum as complete. If you're viewing a datum and would like to come back to it before marking it complete, click Skip. You can also click Previous to navigate back to datums you previously viewed.
While you're viewing a datum within a task, that datum is locked to you: other users are unable to make annotation edits on that particular datum. Once you navigate away from the datum, the lock is released. This ensures that concurrent annotators working on the same task will not overwrite each other's work.
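Conceptually, the lock behaves like a per-datum, per-user claim that is released when you move on. The sketch below illustrates only that idea with an in-memory lock table; it is not Chariot's actual locking mechanism.

```python
# Conceptual sketch of per-datum locking; not Chariot's implementation.
locks: dict[str, str] = {}  # datum_id -> user currently viewing it

def acquire(datum_id: str, user: str) -> bool:
    """Lock the datum for `user` unless another user already holds it."""
    holder = locks.get(datum_id)
    if holder is None or holder == user:
        locks[datum_id] = user
        return True
    return False  # someone else is editing this datum

def release(datum_id: str, user: str) -> None:
    """Release the lock when the user navigates away from the datum."""
    if locks.get(datum_id) == user:
        del locks[datum_id]

assert acquire("datum-123", "alice")
assert not acquire("datum-123", "bob")  # bob cannot edit while alice views it
release("datum-123", "alice")
assert acquire("datum-123", "bob")      # the lock is free after alice moves on
```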
You can exit the Annotation Editor at any point by clicking the X in the top-right corner. Your progress will be saved.
Once all the datums within the task have been reviewed, the task is considered complete, and you'll be presented with the option to Finish and close task or Continue Annotating.
AI-Assisted Annotation
Model Hinting
Model hinting is a feature designed to boost annotation speed and accuracy by letting you select models from the Chariot Model Catalog to provide suggestions or pre-annotate data in the Annotation Editor.
To use this feature, click the Select a model button on the right toolbar at the bottom of the window. This will present all relevant models, meaning those with one or more class labels configured for the given annotation task.
The chosen model may take a few minutes to become available for use.
Once the selected model is ready, it will run inference in the background to generate "hints" for the specified labels. These hints are displayed in a dedicated panel in the lower right-hand portion of the window. Hovering over a hint renders it on the datum; to use a hint, click Use. This adds the hint as an annotation that you can further edit, refine, or remove as needed.
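In effect, the hints are the model's predictions, restricted to labels configured for the task and surfaced as suggestions you can accept. The sketch below illustrates that filtering step with hypothetical field names; it is not Chariot's hinting pipeline.

```python
def hints_for_task(predictions, task_labels, min_confidence=0.5):
    """Keep predictions whose label is configured for the task and whose
    confidence clears a threshold; these become the suggested annotations."""
    return [
        p for p in predictions
        if p["label"] in task_labels and p["confidence"] >= min_confidence
    ]

# Hypothetical model output for one datum.
predictions = [
    {"label": "car",    "confidence": 0.91, "box": [10, 20, 110, 160]},
    {"label": "person", "confidence": 0.88, "box": [200, 40, 240, 180]},
    {"label": "car",    "confidence": 0.32, "box": [300, 50, 330, 90]},
]

print(hints_for_task(predictions, task_labels={"car", "truck"}))  # only the first prediction survives
```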
Segment Anything Lasso Tool
Chariot's Segment Anything lasso tool leverages Meta's Segment Anything Model (SAM2), deployed within Chariot to automatically generate segmentation masks for parts of an image. Designed exclusively for image segmentation tasks, this tool allows users to efficiently isolate and work with specific image regions, simplifying the segmentation process and enhancing precision.
The Segment Anything Model that powers the lasso tool runs on an Inference Server that must scale up in the background, so it might take a couple of minutes for the tool to become ready to use. Once the Inference Server is active, you can use the tool without interruption for the rest of your session, until there is a 15-minute period of inactivity.
To use the lasso tool, click Draw Box and draw a box around the part of the image that contains the object you wish to segment.
If the model identifies the object correctly, a segmentation mask will appear around it. The list of contours generated by SAM can be viewed in the lower right-hand sidebar. To use a contour as an annotation, click Use. The mask is added as an annotation with the currently selected label in the annotation panel, where you can further edit, refine, or remove it. Often, you may want to split the polygon into multiple polygons, potentially with different labels; for more details, see the Splitting Polygons section.
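Outside of Chariot, the same box-prompt interaction can be reproduced with the open-source sam2 package. The sketch below assumes the facebook/sam2-hiera-large checkpoint and a local image file; it only illustrates the prompt-to-mask flow, not how Chariot serves the model internally.

```python
import numpy as np
from PIL import Image
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Load the image and a SAM2 predictor (checkpoint name is an example).
image = np.array(Image.open("street.jpg").convert("RGB"))
predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")
predictor.set_image(image)

# The drawn box, in pixel coordinates: [x_min, y_min, x_max, y_max].
box = np.array([120, 80, 420, 360])

# Ask for a single mask for the object inside the box.
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
print(masks.shape, scores)  # e.g., masks with shape (1, H, W)
```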
Appendix
Annotation Studio (deprecated)
The Annotation Studio is deprecated, but UI documentation can still be found here.