Model Mixins

These classes provide the basis for integrating adapter modules into model classes, including functionality such as adapter saving and loading. Depending on the model, one of these mixins should be implemented by every adapter-supporting model class.

InvertibleAdaptersMixin

class adapters.InvertibleAdaptersMixin

Mixin for Transformer models adding invertible adapters.

add_invertible_adapter(adapter_name: str) bool

Adds an invertible adapter module for the adapter with the given name. If the given adapter does not specify an invertible adapter config, this method does nothing.

Parameters

adapter_name (str) – The name of the adapter for which to add an invertible adapter module.

EmbeddingAdaptersMixin

class adapters.EmbeddingAdaptersMixin

Mixin for Transformer models adding support for dynamically switching embeddings.

add_embeddings(name, tokenizer, reference_embedding=None, reference_tokenizer=None, embedding_dim=None)

Add a new embedding to the model. If a reference embedding and reference tokenizer are provided, tokens present in both tokenizers are initialized from the corresponding embedding in the reference_embedding.

Parameters
  • name – the name of the embedding

  • tokenizer – the tokenizer determining the vocab of the embedding

  • reference_embedding – the reference embedding to use for initializing the embeddings of tokens present in the newly created embedding

  • reference_tokenizer – the tokenizer providing the vocab for the reference embedding

  • embedding_dim – the dimension of the embeddings (if None the embedding_size, or if this doesn’t exist the hidden_size, from the config is used)

delete_embeddings(name)

Deletes the embedding with the given name

Parameters

name – The name of the embedding that should be deleted

load_embeddings(path: str, name: str)

Load a saved embedding from the given path. If the embedding was saved together with a tokenizer, the tokenizer is returned as well.

Parameters
  • path – the path to the saved embedding

  • name – the name the embedding should be loaded as

Returns: the tokenizer if it was saved with the embedding, otherwise None

save_embeddings(path, name, tokenizer=None)

Saves the embedding with the given name. If a tokenizer is passed as well the tokenizer is saved together with the embedding.

Parameters
  • path – The path where the embedding should be saved

  • name – The name of the embedding that should be saved

  • tokenizer – optionally a tokenizer to save with the embedding (default is None)

set_active_embeddings(name)

Sets the active embedding for the forward pass of the model

Parameters

name – The name of the embedding that should be used

ModelAdaptersMixin

class adapters.ModelAdaptersMixin(config, *args, **kwargs)

Mixin for transformer models adding support for loading/saving adapters.

adapter_fusion_to(adapter_names: Union[Fuse, list, str], device: Optional[Union[device, str]] = None, dtype: Optional[dtype] = None)

Moves the adapter fusion layer with the given name to the specified device and data type.

Parameters
  • adapter_names (Union[Fuse, list, str]) – The name of the adapter fusion layer to be moved.

  • device (torch.device or str, optional) – The device on which the adapter fusion layer should be moved.

  • dtype (torch.dtype, optional) – The data type to which the adapter fusion layer should be cast.

adapter_summary(as_dict=False) Union[str, dict]

Returns a string summary of all adapters currently added to the model. Each entry in the summary table has the following attributes:

  • name: the name of the adapter

  • architecture: the architectural base of the adapter

  • #param: the number of parameters of the adapter

  • %param: the number of parameters of the adapter relative to the full model

  • active: whether the adapter is active

  • train: whether the adapter weights are enabled for training

adapter_to(name: str, device: Optional[Union[device, str]] = None, dtype: Optional[dtype] = None)

Moves the adapter with the given name to the specified device and data type.

Parameters
  • name (str) – The name of the adapter to be moved.

  • device (torch.device or str, optional) – The device on which the adapter should be moved.

  • dtype (torch.dtype, optional) – The data type to which the adapter should be cast.

add_adapter(adapter_name: str, config=None, overwrite_ok: bool = False, set_active: bool = False)

Adds a new adapter module of the specified type to the model.

Parameters
  • adapter_name (str) – The name of the adapter module to be added.

  • config (str or dict or AdapterConfig, optional) –

    The adapter configuration, can be either:

    • the string identifier of a pre-defined configuration dictionary

    • a configuration dictionary specifying the full config

    • if not given, the default configuration for this adapter type will be used

  • overwrite_ok (bool, optional) – Overwrite an adapter with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Set the adapter to be the active one. By default (False), the adapter is added but not activated.

add_adapter_fusion(adapter_names: Union[Fuse, list, str], config=None, overwrite_ok: bool = False, set_active: bool = False)

Adds AdapterFusion to the model with all the necessary configurations and weight initializations.

Parameters
  • adapter_names (Fuse or list or str) –

    AdapterFusion layer to add. Can be either:

    • a Fuse composition block

    • a list of adapter names to fuse

    • a comma-separated string of adapter names to fuse

  • config (str or dict) –

    adapter fusion configuration, can be either:

    • a string identifying a pre-defined adapter fusion configuration

    • a dictionary representing the adapter fusion configuration

    • the path to a file containing the adapter fusion configuration

  • overwrite_ok (bool, optional) – Overwrite an AdapterFusion layer with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Activate the added AdapterFusion. By default (False), the AdapterFusion is added but not activated.

apply_to_adapter_layers(fn)

Applies a function to all adapter layers of the model.

apply_to_basemodel_childs(fn)

Applies a function to all direct children of the model if they are an instance of AdapterLayerBase.

average_adapter(adapter_name: str, adapter_list: List[str], weights: Optional[List[float]] = None, normalize_weights: bool = True, overwrite_ok: bool = False, set_active: bool = False)

Adds a new adapter module as weighted average of a set of existing adapter modules.

Parameters
  • adapter_name (str) – The name of the adapter module to be added.

  • adapter_list (List[str]) – The names of the existing adapters whose weights should be averaged.

  • weights (List[float], optional) – The averaging weights, one per adapter in adapter_list. If not given, the adapters are weighted equally.

  • normalize_weights (bool, optional) – Whether to normalize the given weights so that they sum to 1. Defaults to True.

  • overwrite_ok (bool, optional) – Overwrite an adapter with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Set the adapter to be the active one. By default (False), the adapter is added but not activated.

delete_adapter(adapter_name: str)

Deletes the adapter with the specified name from the model.

Parameters

adapter_name (str) – The name of the adapter.

delete_adapter_fusion(adapter_names: Union[Fuse, list, str])

Deletes the AdapterFusion layer of the specified adapters.

Parameters

adapter_names (Union[Fuse, list, str]) – AdapterFusion layer to delete.

eject_prefix_tuning(name: str)

Converts the prefix tuning with the given name from the reparameterized form into the flat form.

Parameters

name (str) – The name of the prefix tuning.

forward_context(context: ForwardContext, *args, **kwargs)

This method is called by the ForwardContext at the beginning of the forward pass.

freeze_model(freeze=True)

Freezes all weights of the model.

get_adapter(name) dict

Returns a dictionary with all weights of the adapter with the specified name.

Parameters

name (str) – The adapter name.

Returns

A nested dictionary containing the weights of the adapter. The dictionary is structured as follows: {<layer id>: {<module location>: <nn.Module>}}. <layer id> = -1 indicates global/shared weights.

Return type

dict

init_adapters(model_config, adapters_config, add_prefix_tuning_pool=True)

This method initializes adapter modules and fusion modules from the model config.

abstract iter_layers() Iterable[Tuple[int, Module]]

Iterates over all layers of the model.

This abstract method has to be implemented by every implementing model.

load_adapter(adapter_name_or_path: str, config: Optional[Union[dict, str]] = None, version: Optional[str] = None, model_name: Optional[str] = None, load_as: Optional[str] = None, source: Optional[str] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, leave_out: Optional[List[int]] = None, id2label=None, set_active: bool = False, use_safetensors: bool = False, **kwargs) str

Loads a pre-trained pytorch adapter module from the local file system or a remote location.

Parameters
  • adapter_name_or_path (str) –

    can be either:

    • the identifier of a pre-trained task adapter to be loaded from Adapter Hub

    • a path to a directory containing adapter weights saved using model.save_adapter()

    • a URL pointing to a zip folder containing a saved adapter module

  • config (dict or str, optional) –

    The requested configuration of the adapter. If not specified, will be either:

    • the default adapter config for the requested adapter if specified

    • the global default adapter config

  • version (str, optional) – The version of the adapter to be loaded.

  • model_name (str, optional) – The string identifier of the pre-trained model.

  • load_as (str, optional) – Load the adapter using this name. By default, the name with which the adapter was saved will be used.

  • source (str, optional) –

    Identifier of the source(s) from where to load the adapter. Can be:

    • "ah" (default): search on AdapterHub.

    • "hf": search on HuggingFace Model Hub.

    • None: search on all sources

  • leave_out – Dynamically drop adapter modules in the specified Transformer layers when loading the adapter.

  • set_active (bool, optional) – Set the loaded adapter to be the active one. By default (False), the adapter is loaded but not activated.

  • use_safetensors (bool, optional) – If True, weights are loaded via safetensors if a safetensors checkpoint is available. Otherwise, the regular torch save method is used.

Returns

The name with which the adapter was added to the model.

Return type

str

load_adapter_fusion(adapter_fusion_name_or_path: str, load_as: Optional[str] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, set_active: bool = False, use_safetensors: bool = False, **kwargs) str

Loads a pre-trained AdapterFusion layer from the local file system.

Parameters
  • adapter_fusion_name_or_path (str) – a path to a directory containing AdapterFusion weights saved using model.save_adapter_fusion().

  • load_as (str, optional) – Load the AdapterFusion using this name. By default, the name with which the AdapterFusion layer was saved will be used.

  • set_active (bool, optional) – Activate the loaded AdapterFusion. By default (False), the AdapterFusion is loaded but not activated.

  • use_safetensors (bool, optional) – If True, weights are loaded via safetensors if a safetensors checkpoint is available. Otherwise, the regular torch save method is used.

Returns

The name with which the AdapterFusion was added to the model.

Return type

str

merge_adapter(name: str)

Merges the weights of the given LoRA module with the Transformer weights as described in the LoRA paper.

Parameters

name (str) – LoRA module to merge.

reset_adapter()

Resets weights of a LoRA module merged using model.merge_adapter(name).

save_adapter(save_directory: str, adapter_name: str, meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, use_safetensors: bool = False)

Saves an adapter and its configuration file to a directory so that it can be shared or reloaded using load_adapter().

Parameters
  • save_directory (str) – Path to a directory where the adapter should be saved.

  • adapter_name (str) – Name of the adapter to be saved.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

Raises

ValueError – If the given adapter name is invalid.

save_adapter_fusion(save_directory: str, adapter_names: Union[Fuse, list, str], meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, use_safetensors: bool = False)

Saves an AdapterFusion layer and its configuration file to a directory so that it can be shared or reloaded using load_adapter_fusion().

Parameters
  • save_directory (str) – Path to a directory where the AdapterFusion should be saved.

  • adapter_names (Union[Fuse, list, str]) – AdapterFusion to be saved.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

Raises

ValueError – If the given AdapterFusion name is invalid.

save_all_adapter_fusions(save_directory: str, meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, use_safetensors: bool = False)

Saves all AdapterFusion layers of this model together with their configuration to subfolders of the given location.

Parameters
  • save_directory (str) – Path to a directory where the AdapterFusion layers should be saved.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

save_all_adapters(save_directory: str, meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, use_safetensors: bool = False)

Saves all adapters of this model together with their configuration to subfolders of the given location.

Parameters
  • save_directory (str) – Path to a directory where the adapters should be saved.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

set_active_adapters(adapter_setup: Union[list, AdapterCompositionBlock], skip_layers: Optional[List[int]] = None)

Sets the adapter modules to be used by default in every forward pass. If no adapter with the given name is found, no module of the respective type will be activated.

Parameters

adapter_setup (list) – The list of adapters to be activated by default. Can be a fusion or stacking configuration.

train_adapter(adapter_setup: Union[list, AdapterCompositionBlock], train_embeddings=False)

Sets the model into mode for training the given adapters.

train_adapter_fusion(adapter_setup: Union[list, AdapterCompositionBlock], unfreeze_adapters=False)

Sets the model into mode for training of adapter fusion determined by a list of adapter names.

train_fusion(adapter_setup: Union[list, AdapterCompositionBlock], unfreeze_adapters=False)

Sets the model into mode for training of adapter fusion determined by a list of adapter names.

ModelWithHeadsAdaptersMixin

class adapters.ModelWithHeadsAdaptersMixin(config, *args, **kwargs)

Mixin adding support for loading/saving adapters to transformer models with head(s).

add_adapter(adapter_name: str, config=None, overwrite_ok: bool = False, set_active: bool = False)

Adds a new adapter module of the specified type to the model.

Parameters
  • adapter_name (str) – The name of the adapter module to be added.

  • config (str or dict, optional) –

    The adapter configuration, can be either:

    • the string identifier of a pre-defined configuration dictionary

    • a configuration dictionary specifying the full config

    • if not given, the default configuration for this adapter type will be used

  • overwrite_ok (bool, optional) – Overwrite an adapter with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Set the adapter to be the active one. By default (False), the adapter is added but not activated.

If self.base_model is self, the model class must inherit from a class that implements this method to prevent infinite recursion.

delete_adapter(adapter_name: str)

Deletes the adapter with the specified name from the model.

Parameters

adapter_name (str) – The name of the adapter.

get_adapter(name)

If self.base_model is self, the model class must inherit from a class that implements this method to prevent infinite recursion.

init_adapters(model_config, adapters_config, add_prefix_tuning_pool=True)

This method initializes adapter modules and fusion modules from the model config.

iter_layers() Iterable[Tuple[int, Module]]

Iterates over all layers of the model.

load_adapter(adapter_name_or_path: str, config: Optional[Union[dict, str]] = None, version: Optional[str] = None, model_name: Optional[str] = None, load_as: Optional[str] = None, source: Optional[str] = None, with_head: bool = True, custom_weights_loaders: Optional[List[WeightsLoader]] = None, leave_out: Optional[List[int]] = None, id2label=None, set_active: bool = False, use_safetensors: bool = False, **kwargs) str

Loads a pre-trained pytorch adapter module from the local file system or a remote location.

Parameters
  • adapter_name_or_path (str) –

    can be either:

    • the identifier of a pre-trained task adapter to be loaded from Adapter Hub

    • a path to a directory containing adapter weights saved using model.save_adapter()

    • a URL pointing to a zip folder containing a saved adapter module

  • config (dict or str, optional) –

    The requested configuration of the adapter. If not specified, will be either:

    • the default adapter config for the requested adapter if specified

    • the global default adapter config

  • version (str, optional) – The version of the adapter to be loaded.

  • model_name (str, optional) – The string identifier of the pre-trained model.

  • load_as (str, optional) – Load the adapter using this name. By default, the name with which the adapter was saved will be used.

  • source (str, optional) –

    Identifier of the source(s) from where to load the adapter. Can be:

    • "ah" (default): search on AdapterHub.

    • "hf": search on HuggingFace Model Hub.

    • None: search on all sources

  • leave_out – Dynamically drop adapter modules in the specified Transformer layers when loading the adapter.

  • set_active (bool, optional) – Set the loaded adapter to be the active one. By default (False), the adapter is loaded but not activated.

  • use_safetensors (bool, optional) – If True, weights are loaded via safetensors if a safetensors checkpoint is available. Otherwise, the regular torch save method is used.

Returns

The name with which the adapter was added to the model.

Return type

str

load_adapter_fusion(adapter_fusion_name_or_path: str, load_as: Optional[str] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, set_active: bool = False, with_head: bool = True, use_safetensors: bool = False, **kwargs) str

Loads a pre-trained AdapterFusion layer from the local file system.

Parameters
  • adapter_fusion_name_or_path (str) – a path to a directory containing AdapterFusion weights saved using model.save_adapter_fusion().

  • load_as (str, optional) – Load the AdapterFusion using this name. By default, the name with which the AdapterFusion layer was saved will be used.

  • set_active (bool, optional) – Activate the loaded AdapterFusion. By default (False), the AdapterFusion is loaded but not activated.

  • use_safetensors (bool, optional) – If True, weights are loaded via safetensors if a safetensors checkpoint is available. Otherwise, the regular torch save method is used.

Returns

The name with which the AdapterFusion was added to the model.

Return type

str

load_head(save_directory: str, load_as: Optional[str] = None, id2label: Optional[Dict[int, str]] = None, use_safetensors: bool = False, **kwargs) str

Loads a model prediction head from a directory where it was saved using save_head().

Parameters
  • save_directory (str) – Path to the directory where the prediction head is saved.

  • load_as (str, optional) – Load the head using this name. By default, the name with which the head was saved will be used.

  • id2label (Dict[int, str], optional) – Provide a custom mapping from class ids to class labels. Defaults to None.

  • use_safetensors (bool, optional) – If True, weights are loaded via safetensors if a safetensors checkpoint is available. Otherwise, the regular torch save method is used.

Returns

The name with which the prediction head was added to the model.

Return type

str

save_adapter(save_directory: str, adapter_name: str, with_head: bool = True, meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, use_safetensors: bool = False)

Saves an adapter and its configuration file to a directory so that it can be shared or reloaded using load_adapter().

Parameters
  • save_directory (str) – Path to a directory where the adapter should be saved.

  • adapter_name (str) – Name of the adapter to be saved.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

Raises

ValueError – If the given adapter name is invalid.

save_adapter_fusion(save_directory: str, adapter_names: Union[Fuse, list, str], meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, with_head: Union[bool, str] = False, use_safetensors: bool = False)

Saves an AdapterFusion layer and its configuration file to a directory so that it can be shared or reloaded using load_adapter_fusion().

Parameters
  • save_directory (str) – Path to a directory where the AdapterFusion should be saved.

  • adapter_names (Union[Fuse, list, str]) – AdapterFusion to be saved.

  • with_head (Union[bool, str]) – If True, will save a head with the same name as the AdapterFusionLayer. If a string, this will be used as the name of the head to be saved.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

Raises

ValueError – If the given AdapterFusion name is invalid.

save_all_adapters(save_directory: str, with_head: bool = True, meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, use_safetensors: bool = False)

Saves all adapters of this model together with their configuration to subfolders of the given location.

Parameters
  • save_directory (str) – Path to a directory where the adapters should be saved.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

save_all_heads(save_directory: str, use_safetensors: bool = False)

Saves all prediction heads of this model to subfolders of the given location.

Parameters
  • save_directory (str) – Path to the base directory where prediction heads should be saved.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

save_head(save_directory: str, head_name: Optional[str] = None, use_safetensors: bool = False) None

Saves a model prediction head to a directory such that it can be reloaded using load_head().

Parameters
  • save_directory (str) – Path to the directory where the prediction head should be saved.

  • head_name (str, optional) – Name of the head to save. Set to None if model only has one head. Defaults to None.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

train_adapter(adapter_setup: Union[list, AdapterCompositionBlock], train_embeddings=False)

Sets the model into mode for training the given adapters. If self.base_model is self, the model class must inherit from a class that implements this method to prevent infinite recursion.

train_adapter_fusion(adapter_setup: Union[list, AdapterCompositionBlock], unfreeze_adapters=False)

Sets the model into mode for training of adapter fusion determined by a list of adapter names. If self.base_model is self, the model class must inherit from a class that implements this method to prevent infinite recursion.

ModelWithFlexibleHeadsAdaptersMixin

class adapters.ModelWithFlexibleHeadsAdaptersMixin(*args, **kwargs)

Adds flexible prediction heads to a model class. Implemented by the XModelWithHeads classes.

property active_head: Union[str, List[str]]

The active prediction head configuration of this model. Can be either the name of a single available head (string) or a list of multiple available heads. In case of a list of heads, the same base model is forwarded through all specified heads.

Returns

A string or a list of strings describing the active head configuration.

Return type

Union[str, List[str]]

add_causal_lm_head(head_name, activation_function='gelu', overwrite_ok=False)

Adds a causal language modeling head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • activation_function (str, optional) – Activation function. Defaults to ‘gelu’.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

add_classification_head(head_name, num_labels=2, layers=2, activation_function='tanh', overwrite_ok=False, multilabel=False, id2label=None, use_pooler=False)

Adds a sequence classification head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • num_labels (int, optional) – Number of classification labels. Defaults to 2.

  • layers (int, optional) – Number of layers. Defaults to 2.

  • activation_function (str, optional) – Activation function. Defaults to ‘tanh’.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

  • multilabel (bool, optional) – Enable multilabel classification setup. Defaults to False.

add_dependency_parsing_head(head_name, num_labels=2, overwrite_ok=False, id2label=None)

Adds a biaffine dependency parsing head on top of the model. The parsing head uses the architecture described in “Is Supervised Syntactic Parsing Beneficial for Language Understanding? An Empirical Investigation” (Glavaš & Vulić, 2021) (https://arxiv.org/pdf/2008.06788.pdf).

Parameters
  • head_name (str) – The name of the head.

  • num_labels (int, optional) – Number of labels. Defaults to 2.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

  • id2label (dict, optional) – Mapping from label ids to labels. Defaults to None.

add_image_classification_head(head_name, num_labels=2, layers=1, activation_function='tanh', overwrite_ok=False, multilabel=False, id2label=None, use_pooler=False)

Adds an image classification head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • num_labels (int, optional) – Number of classification labels. Defaults to 2.

  • layers (int, optional) – Number of layers. Defaults to 1.

  • activation_function (str, optional) – Activation function. Defaults to ‘tanh’.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

  • multilabel (bool, optional) – Enable multilabel classification setup. Defaults to False.

add_masked_lm_head(head_name, activation_function='gelu', overwrite_ok=False)

Adds a masked language modeling head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • activation_function (str, optional) – Activation function. Defaults to ‘gelu’.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

add_multiple_choice_head(head_name, num_choices=2, layers=2, activation_function='tanh', overwrite_ok=False, id2label=None, use_pooler=False)

Adds a multiple choice head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • num_choices (int, optional) – Number of choices. Defaults to 2.

  • layers (int, optional) – Number of layers. Defaults to 2.

  • activation_function (str, optional) – Activation function. Defaults to ‘tanh’.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

add_qa_head(head_name, num_labels=2, layers=1, activation_function='tanh', overwrite_ok=False, id2label=None)

Adds a question answering head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • num_labels (int, optional) – Number of classification labels. Defaults to 2.

  • layers (int, optional) – Number of layers. Defaults to 1.

  • activation_function (str, optional) – Activation function. Defaults to ‘tanh’.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

add_seq2seq_lm_head(head_name, overwrite_ok=False)

Adds a sequence-to-sequence language modeling head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

add_tagging_head(head_name, num_labels=2, layers=1, activation_function='tanh', overwrite_ok=False, id2label=None)

Adds a token classification head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • num_labels (int, optional) – Number of classification labels. Defaults to 2.

  • layers (int, optional) – Number of layers. Defaults to 1.

  • activation_function (str, optional) – Activation function. Defaults to ‘tanh’.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

delete_head(head_name: str)

Deletes the prediction head with the specified name from the model.

Parameters

head_name (str) – The name of the prediction head to delete.

forward_head(all_outputs, head_name=None, cls_output=None, attention_mask=None, return_dict=False, context=None, **kwargs)

The forward pass through a prediction head configuration. There are three ways to specify the used prediction head configuration (in order of priority):

  1. If a head_name is passed, the head with the given name is used.

  2. If the forward call is executed within an AdapterSetup context, the head configuration is read from the context.

  3. If the active_head property is set, the head configuration is read from there.

Parameters
  • all_outputs (dict) – The outputs of the base model.

  • head_name (str, optional) – The name of the prediction head to use. If None, the active head is used.

  • cls_output (torch.Tensor, optional) – The classification output of the model.

  • attention_mask (torch.Tensor, optional) – The attention mask of the model.

  • return_dict (bool) – Whether or not to return a ModelOutput instead of a plain tuple.

  • get_cls_from_eos_tokens (bool) – If set to True, retrieve classifier token representations from the last <eos> token in the sequence. Setting to True requires eos_mask to be passed as well.

  • **kwargs – Additional keyword arguments passed to the forward pass of the head.

get_labels(head_name=None)

Returns the labels the given head is assigning/predicting.

Parameters
  • head_name (str, optional) – The name of the head whose labels should be returned. If None, the labels of the active head are returned. Defaults to None.

Returns: labels

get_labels_dict(head_name=None)

Returns the id2label dict for the given head.

Parameters
  • head_name (str, optional) – The name of the head whose labels should be returned. If None, the labels of the active head are returned. Defaults to None.

Returns: id2label

head_type()

Checks which head type the decorated function belongs to and raises an error if the model does not support the head type.

set_active_adapters(adapter_setup: Union[list, AdapterCompositionBlock], skip_layers: Optional[List[int]] = None)

Sets the adapter modules to be used by default in every forward pass. This setting can be overridden by passing the adapter_names parameter in the forward() pass. If no adapter with the given name is found, no module of the respective type will be activated. In case the calling model class supports named prediction heads, this method will attempt to activate a prediction head with the name of the last adapter in the list of passed adapter names.

Parameters

adapter_setup (list) – The list of adapters to be activated by default. Can be a fusion or stacking configuration.

tie_weights()

Tie the weights between the input embeddings and the output embeddings.

If the torchscript flag is set in the configuration, parameter sharing cannot be handled, so the weights are cloned instead.

PushAdapterToHubMixin

class adapters.hub_mixin.PushAdapterToHubMixin

Mixin providing support for uploading adapters to HuggingFace’s Model Hub.

push_adapter_to_hub(repo_name: str, adapter_name: str, organization: Optional[str] = None, adapterhub_tag: Optional[str] = None, datasets_tag: Optional[str] = None, local_path: Optional[str] = None, commit_message: Optional[str] = None, private: Optional[bool] = None, token: Optional[Union[bool, str]] = None, overwrite_adapter_card: bool = False, create_pr: bool = False, revision: Optional[str] = None, commit_description: Optional[str] = None, adapter_card_kwargs: Optional[dict] = None, **deprecated_kwargs)

Upload an adapter to HuggingFace’s Model Hub.

Parameters
  • repo_name (str) – The name of the repository on the model hub to upload to.

  • adapter_name (str) – The name of the adapter to be uploaded.

  • organization (str, optional) – Organization in which to push the adapter (you must be a member of this organization). Defaults to None.

  • adapterhub_tag (str, optional) – Tag of the format <task>/<subtask> for categorization on https://adapterhub.ml/explore/. See https://docs.adapterhub.ml/contributing.html#add-a-new-task-or-subtask for more. If not specified, datasets_tag must be given in case a new adapter card is generated. Defaults to None.

  • datasets_tag (str, optional) – Dataset identifier from https://huggingface.co/datasets. If not specified, adapterhub_tag must be given in case a new adapter card is generated. Defaults to None.

  • local_path (str, optional) – Local path used as clone directory of the adapter repository. If not specified, will create a temporary directory. Defaults to None.

  • commit_message (str, optional) – Message to commit while pushing. Will default to "add config", "add tokenizer" or "add model" depending on the type of the class.

  • private (bool, optional) – Whether or not the repository created should be private (requires a paying subscription).

  • token (bool or str, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified.

  • overwrite_adapter_card (bool, optional) – Overwrite an existing adapter card with a newly generated one. If set to False, will only generate an adapter card, if none exists. Defaults to False.

  • create_pr (bool, optional) – Whether or not to create a PR with the uploaded files or directly commit.

  • revision (str, optional) – Branch to push the uploaded files to.

  • commit_description (str, optional) – The description of the commit that will be created

Returns

The url of the adapter repository on the model hub.

Return type

str