
Training

training

High-level orchestrator for model training

Warning

This module is incomplete; it currently contains only type annotations intended for future use.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| Epoch | TypeAlias | The number of times the model has seen the entire training dataset. |
| TrainingBatchSize | TypeAlias | Number of training examples (audio chunks) processed before a weight update. |
| GradientAccumulationSteps | TypeAlias | Number of batches to process before performing a weight update. |
| OptimizerName | TypeAlias | Algorithm used to update the model weights to minimize the loss function. |
| LearningRateSchedulerName | TypeAlias | Algorithm used to adjust the learning rate during training. |
| UseAutomaticMixedPrecision | TypeAlias | Whether to use automatic mixed precision (AMP) during training. |
| UseLoRA | TypeAlias | Whether to use Low-Rank Adaptation (LoRA) for efficient fine-tuning. |

Epoch module-attribute

```python
Epoch: TypeAlias = int
```

The number of times the model has seen the entire training dataset.

TrainingBatchSize module-attribute

```python
TrainingBatchSize: TypeAlias = Annotated[int, Gt(0)]
```

Number of training examples (audio chunks) processed before a weight update.

Larger batches may offer more stable gradients but require more memory.
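The alias itself is only an annotation; the Gt(0) constraint takes effect only when a validator interprets it. A minimal sketch, assuming Gt comes from annotated_types and validation is done with pydantic's TypeAdapter (neither library is confirmed by these docs):

```python
from typing import Annotated, TypeAlias

from annotated_types import Gt
from pydantic import TypeAdapter, ValidationError

TrainingBatchSize: TypeAlias = Annotated[int, Gt(0)]

adapter = TypeAdapter(TrainingBatchSize)
print(adapter.validate_python(16))  # 16 -- positive values pass

try:
    adapter.validate_python(0)  # rejected: Gt(0) requires a value > 0
except ValidationError as err:
    print(err)
```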

GradientAccumulationSteps module-attribute

```python
GradientAccumulationSteps: TypeAlias = Annotated[int, Gt(0)]
```

Number of batches to process before performing a weight update.

This simulates a larger batch size without increasing memory use: for example, a batch size of 4 with 8 accumulation steps yields an effective batch size of 32.
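A minimal PyTorch sketch of the accumulation loop described above (the model, data, and loss here are placeholders, not part of this module):

```python
import torch

accumulation_steps = 8  # GradientAccumulationSteps

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()
# 16 toy batches of size 4
dataloader = [(torch.randn(4, 10), torch.randn(4, 1)) for _ in range(16)]

optimizer.zero_grad()
for step, (inputs, targets) in enumerate(dataloader, start=1):
    loss = loss_fn(model(inputs), targets)
    # Scale the loss so the accumulated gradient averages over the effective batch.
    (loss / accumulation_steps).backward()
    if step % accumulation_steps == 0:
        optimizer.step()       # weight update every 8 batches
        optimizer.zero_grad()  # effective batch size: 4 * 8 = 32
```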

OptimizerName module-attribute

```python
OptimizerName: TypeAlias = str
```

Algorithm used to update the model weights to minimize the loss function.
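As a plain str, the alias leaves resolution to the trainer. A hedged sketch of how an optimizer name might be mapped to a torch.optim class (the registry and build_optimizer helper below are hypothetical, not this module's API):

```python
import torch

# Hypothetical name-to-class registry.
OPTIMIZERS = {
    "adamw": torch.optim.AdamW,
    "sgd": torch.optim.SGD,
}

def build_optimizer(name: str, params, lr: float) -> torch.optim.Optimizer:
    return OPTIMIZERS[name.lower()](params, lr=lr)

model = torch.nn.Linear(10, 1)
optimizer = build_optimizer("adamw", model.parameters(), lr=1e-3)
```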

LearningRateSchedulerName module-attribute

```python
LearningRateSchedulerName: TypeAlias = str
```

Algorithm used to adjust the learning rate during training.

e.g., ReduceLROnPlateau reduces the learning rate when a monitored metric stops improving.
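A minimal sketch of driving PyTorch's ReduceLROnPlateau from a validation metric; how this module ultimately maps scheduler names to classes is not specified here:

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
# Halve the learning rate if the monitored loss fails to improve for 2 epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=2
)

for epoch in range(10):
    val_loss = 1.0 / (epoch + 1)  # placeholder for a real validation loss
    scheduler.step(val_loss)      # scheduler reacts to the monitored metric
```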

UseAutomaticMixedPrecision module-attribute

```python
UseAutomaticMixedPrecision: TypeAlias = bool
```

Whether to use automatic mixed precision (AMP) during training.
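A minimal PyTorch AMP sketch: autocast runs the forward pass and loss in reduced precision while GradScaler guards fp16 gradients against underflow. It assumes a CUDA device and PyTorch >= 2.3 (older versions use torch.cuda.amp.GradScaler); the names are generic, not this module's API:

```python
import torch

device = "cuda"
model = torch.nn.Linear(10, 1).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scaler = torch.amp.GradScaler(device)
loss_fn = torch.nn.MSELoss()

inputs = torch.randn(4, 10, device=device)
targets = torch.randn(4, 1, device=device)

optimizer.zero_grad()
with torch.autocast(device_type=device, dtype=torch.float16):
    loss = loss_fn(model(inputs), targets)
scaler.scale(loss).backward()  # scale the loss so fp16 grads stay representable
scaler.step(optimizer)         # unscale gradients, then apply the update
scaler.update()                # adjust the scale factor for the next step
```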

UseLoRA module-attribute

```python
UseLoRA: TypeAlias = bool
```

Whether to use Low-Rank Adaptation for efficient fine-tuning.

This freezes pre-trained weights and injects smaller, trainable low-rank matrices, dramatically reducing the number of trainable parameters.
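A minimal LoRA sketch of that idea: the pre-trained weight is frozen and a trainable low-rank update B @ A is added on top. Illustrative only; a real setup would more likely use a library such as peft:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors: only rank * (in + out) parameters are trained.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank update.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

layer = LoRALinear(nn.Linear(512, 512), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 8192 trainable params vs. 262656 in the full layer
```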