Train Models

Chariot provides a codeless environment for deep learning training, so anyone, from curious novices to experienced data scientists, can train models without writing code, provisioning servers, or installing software.

Currently, the following computer vision tasks are supported:

  • Image classification
  • Object detection
  • Oriented object detection
  • Image segmentation

The following natural language processing (NLP) tasks are supported:

  • Text classification
  • Token classification

A Training Blueprint defines a training loop in Chariot. Each Blueprint consists primarily of a container image containing the training loop code and a configuration schema that specifies the format of the inputs, hyperparameters, and other settings for the training run.
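To make the idea of a configuration schema concrete, here is a minimal sketch of how a Blueprint might declare its hyperparameters and validate a user-supplied configuration against them. The field names (`epochs`, `learning_rate`, `batch_size`) and the `resolve_config` helper are illustrative assumptions, not Chariot's actual schema format.

```python
# Hypothetical example: a Blueprint's hyperparameter schema and a simple
# validator. Field names and types are illustrative only.
SCHEMA = {
    "epochs": {"type": int, "default": 10},
    "learning_rate": {"type": float, "default": 1e-3},
    "batch_size": {"type": int, "default": 32},
}

def resolve_config(user_config: dict) -> dict:
    """Merge user-supplied values with schema defaults, type-checking each one."""
    config = {}
    for key, spec in SCHEMA.items():
        value = user_config.get(key, spec["default"])
        if not isinstance(value, spec["type"]):
            raise TypeError(
                f"{key} must be {spec['type'].__name__}, "
                f"got {type(value).__name__}"
            )
        config[key] = value
    return config

# Overriding one field; the rest fall back to schema defaults.
config = resolve_config({"epochs": 5})
```

A real platform would typically express such a schema declaratively (for example, as JSON Schema) so the UI can render a form for it, but the merge-and-validate pattern is the same.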

Two Blueprints come bundled with Chariot: teddy and teddy_wizard. Developed by Striveworks, these Blueprints support a variety of machine learning tasks, including image classification, image segmentation, object detection, text classification, and token classification. The Blueprint images for teddy and teddy_wizard are written in Python and rely heavily on PyTorch.

Users can also create their own Blueprints based on any popular or custom machine learning framework. Once a Blueprint is registered with Chariot, the platform enables users to train models with it. Blueprint authors can use the Blueprint Toolkit Python library to interact with dataset management, training checkpoint and metrics tracking, and other functionality in the Chariot platform.
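The division of labor described above (the platform supplies data access plus checkpoint and metric tracking, while the Blueprint image supplies the loop itself) can be sketched as a training function that receives those capabilities as injected hooks. Everything here is a hypothetical stand-in: the hook names (`log_metric`, `save_checkpoint`) and the toy "model" are illustrative, not the real Blueprint Toolkit API.

```python
# Hypothetical sketch of a Blueprint training loop. The platform would inject
# the dataset and the tracking hooks; the hook names are illustrative only.
from typing import Callable, Iterable

def training_loop(
    dataset: Iterable[float],
    epochs: int,
    log_metric: Callable[[str, int, float], None],
    save_checkpoint: Callable[[int, dict], None],
) -> dict:
    """Fit a running estimate of the data mean, reporting progress each epoch."""
    state = {"mean": 0.0}
    data = list(dataset)
    target = sum(data) / len(data)
    for epoch in range(epochs):
        # Toy update rule: move the estimate halfway toward the sample mean.
        state["mean"] += 0.5 * (target - state["mean"])
        loss = (target - state["mean"]) ** 2
        log_metric("loss", epoch, loss)      # metric tracked by the platform
        save_checkpoint(epoch, dict(state))  # restorable training state
    return state

# Drive the loop with in-memory hooks standing in for the platform.
metrics, checkpoints = [], []
final_state = training_loop(
    [1.0, 2.0, 3.0],
    epochs=3,
    log_metric=lambda name, epoch, value: metrics.append((name, epoch, value)),
    save_checkpoint=lambda epoch, state: checkpoints.append((epoch, state)),
)
```

In a real Blueprint, the loop body would be framework code (e.g., a PyTorch optimizer step), but the contract is the same: consume the configured dataset, then emit metrics and checkpoints through the platform's tracking interfaces.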

Chariot’s model training functions are primarily built on top of PyTorch but also support additional training frameworks. For more information about the PyTorch ecosystem of models and descriptions of its model architectures, see the PyTorch Document Library. For guidance on choosing the best architecture, see our Data Science Fundamentals guide on neural network architecture. The appendix provides a complete list of supported models along with their sizes (number of parameters), depths (number of layers), and disk footprints (checkpoint size, in MB).