Adapter Training

Classes and methods related to training adapters.

class adapters.training.AdapterArguments(train_adapter: bool = False, load_adapter: Optional[str] = '', adapter_config: Optional[str] = 'seq_bn', load_lang_adapter: Optional[str] = None, lang_adapter_config: Optional[str] = None)

The subset of arguments related to adapter training.

Parameters
  • train_adapter (bool) – Whether to train an adapter instead of the full model.

  • load_adapter (str) – Pre-trained adapter module to be loaded from the Hub.

  • adapter_config (str) – Adapter configuration. Either a configuration identifier (e.g. "seq_bn") or a path to a configuration file.

  • load_lang_adapter (str) – Pre-trained language adapter module to be loaded from the Hub.

  • lang_adapter_config (str) – Language adapter configuration. Either a configuration identifier or a path to a configuration file.
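
As a minimal sketch of typical usage (the script name and flag values are illustrative): since AdapterArguments is a dataclass, it can be filled from the command line with Transformers' HfArgumentParser, or constructed directly:

    from transformers import HfArgumentParser

    from adapters.training import AdapterArguments

    # Parse adapter-related flags from the command line, e.g.:
    #   python train.py --train_adapter --adapter_config seq_bn
    parser = HfArgumentParser(AdapterArguments)
    (adapter_args,) = parser.parse_args_into_dataclasses()

    # Or construct the arguments directly in code:
    adapter_args = AdapterArguments(train_adapter=True, adapter_config="seq_bn")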

adapters.training.setup_adapter_training(model, adapter_args: AdapterArguments, adapter_name: str, adapter_config_kwargs: Optional[dict] = None, adapter_load_kwargs: Optional[dict] = None)

Set up the model for adapter training based on the given adapter arguments.

Parameters
  • model (PreTrainedModel) – The model instance to be trained.

  • adapter_args (AdapterArguments) – The adapter arguments used for configuration.

  • adapter_name (str) – The name of the adapter to be added.

  • adapter_config_kwargs (dict, optional) – Additional keyword arguments passed when resolving the adapter configuration.

  • adapter_load_kwargs (dict, optional) – Additional keyword arguments passed when loading a pre-trained adapter.

Returns

A tuple containing the name of the task adapter and the name of the language adapter (None if no language adapter was loaded).

Return type

Tuple[str, str]
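
A minimal sketch of how setup_adapter_training is typically wired together ("bert-base-uncased" and the adapter name "imdb_task" are illustrative placeholders; adapters.init() is needed to add adapter support to a plain Transformers model):

    import adapters
    from transformers import AutoModelForSequenceClassification

    from adapters.training import AdapterArguments, setup_adapter_training

    model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
    adapters.init(model)  # enable adapter support on a plain Transformers model

    adapter_args = AdapterArguments(train_adapter=True, adapter_config="seq_bn")

    # Adds and activates a new adapter (or loads a pre-trained one if
    # load_adapter is set) and freezes the base model weights for training.
    task_adapter_name, lang_adapter_name = setup_adapter_training(
        model, adapter_args, adapter_name="imdb_task"  # hypothetical name
    )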

class adapters.trainer.AdapterTrainer(model: Optional[Union[PreTrainedModel, Module]] = None, args: Optional[TrainingArguments] = None, data_collator: Optional[DataCollator] = None, train_dataset: Optional[Dataset] = None, eval_dataset: Optional[Dataset] = None, tokenizer: Optional[PreTrainedTokenizerBase] = None, model_init: Optional[Callable[[], PreTrainedModel]] = None, compute_metrics: Optional[Callable[[EvalPrediction], Dict]] = None, callbacks: Optional[List[TrainerCallback]] = None, adapter_names: Optional[List[List[str]]] = None, optimizers: Tuple[Optimizer, LambdaLR] = (None, None), preprocess_logits_for_metrics: Optional[Callable[[Tensor, Tensor], Tensor]] = None)
create_optimizer()

Set up the optimizer.

We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the Trainer’s init through optimizers, or subclass the Trainer and override this method.
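
AdapterTrainer behaves like the Hugging Face Trainer but is tailored to training only the activated adapter weights. A minimal sketch, reusing the model from the setup_adapter_training example above (train_dataset and eval_dataset are assumed to exist):

    from transformers import TrainingArguments

    from adapters import AdapterTrainer

    training_args = TrainingArguments(
        output_dir="./training_output",
        learning_rate=1e-4,  # adapter training commonly uses a higher learning rate than full fine-tuning
        num_train_epochs=3,
    )

    trainer = AdapterTrainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
    )
    trainer.train()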

class adapters.trainer.AdapterTrainerCallback(trainer)
on_step_end(args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs)

Event called at the end of a training step. If using gradient accumulation, one training step might take several inputs.

on_train_begin(args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs)

Event called at the beginning of training.
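
AdapterTrainerCallback is attached internally by AdapterTrainer, so you normally do not instantiate it yourself. For a user-defined callback hooking into the same events, a minimal sketch (StepLogger is hypothetical; model, training_args, and train_dataset reuse the placeholders above):

    from transformers import TrainerCallback

    class StepLogger(TrainerCallback):
        # Hypothetical callback using the same hooks documented above.
        def on_train_begin(self, args, state, control, **kwargs):
            print("training begins")

        def on_step_end(self, args, state, control, **kwargs):
            if state.global_step % 100 == 0:
                print(f"finished step {state.global_step}")

    trainer = AdapterTrainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset,
        callbacks=[StepLogger()],
    )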

class adapters.trainer.Seq2SeqAdapterTrainer(model: Optional[Union[PreTrainedModel, Module]] = None, args: Optional[TrainingArguments] = None, data_collator: Optional[DataCollator] = None, train_dataset: Optional[Dataset] = None, eval_dataset: Optional[Dataset] = None, tokenizer: Optional[PreTrainedTokenizerBase] = None, model_init: Optional[Callable[[], PreTrainedModel]] = None, compute_metrics: Optional[Callable[[EvalPrediction], Dict]] = None, callbacks: Optional[List[TrainerCallback]] = None, adapter_names: Optional[List[List[str]]] = None, optimizers: Tuple[Optimizer, LambdaLR] = (None, None), preprocess_logits_for_metrics: Optional[Callable[[Tensor, Tensor], Tensor]] = None)
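
Seq2SeqAdapterTrainer plays the same role for sequence-to-sequence models that AdapterTrainer plays for the standard Trainer. A minimal sketch ("t5-small", the adapter name "summarization", and the datasets are illustrative placeholders):

    import adapters
    from adapters import Seq2SeqAdapterTrainer
    from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments

    model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
    adapters.init(model)
    model.add_adapter("summarization")    # hypothetical adapter name
    model.train_adapter("summarization")  # activate it and freeze the base weights

    args = Seq2SeqTrainingArguments(output_dir="./seq2seq_output", predict_with_generate=True)
    trainer = Seq2SeqAdapterTrainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
    )
    trainer.train()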