DeBERTa-v2

Overview

The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. It is based on Google’s BERT model released in 2018 and Facebook’s RoBERTa model released in 2019.

It builds on RoBERTa with disentangled attention and an enhanced mask decoder, while being trained on half of the data used for RoBERTa.

The abstract from the paper is the following:

Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency of model pretraining and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa.

The following information is taken directly from the [original implementation repository](https://github.com/microsoft/DeBERTa). DeBERTa v2 is the second version of the DeBERTa model. It includes the 1.5B model used for the SuperGLUE single-model submission, which achieved a score of 89.9 versus the human baseline of 89.8. You can find more details about this submission in the authors’ [blog](https://www.microsoft.com/en-us/research/blog/microsoft-deberta-surpasses-human-performance-on-the-superglue-benchmark/).

New in v2:

  • Vocabulary: In v2 the tokenizer is changed to use a new vocabulary of size 128K built from the training data. Instead of a GPT2-based tokenizer, the tokenizer is now a [sentencepiece-based](https://github.com/google/sentencepiece) tokenizer.

  • nGiE (nGram Induced Input Encoding): The DeBERTa-v2 model uses an additional convolution layer alongside the first transformer layer to better learn the local dependencies of input tokens.

  • Sharing the position projection matrix with the content projection matrix in the attention layer: Based on previous experiments, this can save parameters without affecting performance.

  • Applying a bucket to encode relative positions: The DeBERTa-v2 model uses a log bucket to encode relative positions, similar to T5 (see the sketch after this list).

  • 900M model & 1.5B model: Two additional model sizes are available, 900M and 1.5B, which significantly improve the performance of downstream tasks.
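
To make the bucketing concrete, here is a minimal, self-contained sketch of log-bucket relative position encoding. It illustrates the idea (exact positions near zero, logarithmic compression further out) and is a simplification, not the model's exact implementation:

```python
import math

def log_bucket_position(rel_pos: int, bucket_size: int, max_position: int) -> int:
    # Positions within +/- bucket_size // 2 keep their exact value; farther
    # positions are compressed logarithmically into the remaining buckets.
    mid = bucket_size // 2
    if abs(rel_pos) <= mid:
        return rel_pos
    log_pos = math.ceil(
        math.log(abs(rel_pos) / mid) / math.log((max_position - 1) / mid) * (mid - 1)
    ) + mid
    return int(math.copysign(log_pos, rel_pos))
```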

This model was contributed by DeBERTa. The TF 2.0 implementation of this model was contributed by kamalkraj. The original code can be found here.

DebertaV2AdapterModel

class adapters.DebertaV2AdapterModel(config)

Deberta v2 Model transformer with the option to add multiple flexible heads on top.
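
A minimal usage sketch; the checkpoint name, adapter name, and config string are illustrative assumptions:

```python
from adapters import DebertaV2AdapterModel
from transformers import AutoTokenizer

# checkpoint name is an assumption; any DeBERTa-v2 checkpoint should work
model = DebertaV2AdapterModel.from_pretrained("microsoft/deberta-v2-xlarge")
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")

# add a bottleneck adapter plus a matching classification head, then activate both
model.add_adapter("sst-2", config="seq_bn")
model.add_classification_head("sst-2", num_labels=2)
model.set_active_adapters("sst-2")

inputs = tokenizer("AdapterHub is great!", return_tensors="pt")
outputs = model(**inputs)  # logits produced by the "sst-2" head
```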

property active_adapters: AdapterCompositionBlock

If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT official documentation: https://huggingface.co/docs/peft

Gets the current active adapters of the model. In case of multi-adapter inference (combining multiple adapters for inference) returns the list of all active adapters so that users can deal with them accordingly.

For previous PEFT versions (that do not support multi-adapter inference), module.active_adapter will return a single string.

property active_head: Union[str, List[str]]

The active prediction head configuration of this model. Can be either the name of a single available head (string) or a list of multiple available heads. In case of a list of heads, the same base model is forwarded through all specified heads.

Returns

A string or a list of strings describing the active head configuration.

Return type

Union[str, List[str]]
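
For illustration (head names are assumptions; the property is assumed to be settable, as in the adapters library):

```python
model.add_classification_head("cls_head", num_labels=2)
model.add_tagging_head("tag_head", num_labels=5)

# forward the same base model output through both heads
model.active_head = ["cls_head", "tag_head"]
print(model.active_head)  # ['cls_head', 'tag_head']
```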

adapter_fusion_to(adapter_names: Union[Fuse, list, str], device: Optional[Union[device, str]] = None, dtype: Optional[dtype] = None)

Moves the adapter fusion layer with the given name to the specified device and data type.

Parameters
  • adapter_names (Union[Fuse, list, str]) – The name of the adapter fusion layer to be moved.

  • device (torch.device or str, optional) – The device to which the adapter fusion layer should be moved.

  • dtype (torch.dtype, optional) – The data type to which the adapter fusion layer should be cast.

adapter_summary(as_dict=False) Union[str, dict]

Returns a string summary of all adapters currently added to the model. Each entry in the summary table has the following attributes:

  • name: the name of the adapter

  • architecture: the architectural base of the adapter

  • #param: the number of parameters of the adapter

  • %param: the number of parameters of the adapter relative to the full model

  • active: whether the adapter is active

  • train: whether the adapter weights are enabled for training
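
A short sketch (adapter names and config strings are assumptions):

```python
model.add_adapter("task_a", config="seq_bn")
model.add_adapter("task_b", config="lora")

print(model.adapter_summary())               # human-readable summary table
info = model.adapter_summary(as_dict=True)   # the same data as a structure
```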

adapter_to(name: str, device: Optional[Union[device, str]] = None, dtype: Optional[dtype] = None)

Moves the adapter with the given name to the specified device and data type.

Parameters
  • name (str) – The name of the adapter to be moved.

  • device (torch.device or str, optional) – The device to which the adapter should be moved.

  • dtype (torch.dtype, optional) – The data type to which the adapter should be cast.
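
For instance (assuming an adapter named "task_a" has been added):

```python
import torch

# move the base model first, then align a single adapter's device and dtype
model.to("cuda")
model.adapter_to("task_a", device="cuda", dtype=torch.float16)
```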

add_adapter(adapter_name: str, config=None, overwrite_ok: bool = False, set_active: bool = False)

Adds a new adapter module of the specified type to the model.

Parameters
  • adapter_name (str) – The name of the adapter module to be added.

  • config (str or dict, optional) –

    The adapter configuration, can be either:

    • the string identifier of a pre-defined configuration dictionary

    • a configuration dictionary specifying the full config

    • if not given, the default configuration for this adapter type will be used

  • overwrite_ok (bool, optional) – Overwrite an adapter with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Set the adapter to be the active one. By default (False), the adapter is added but not activated.

If self.base_model is self, the model must inherit from a class that implements this method to preclude infinite recursion.
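
A sketch of both configuration styles; the config identifier "seq_bn" and the AdapterConfig.load usage follow the adapters library conventions, but all names are illustrative:

```python
from adapters import AdapterConfig

# via the string identifier of a pre-defined configuration
model.add_adapter("task_a", config="seq_bn")

# via a full config object, here with a customized reduction factor
config = AdapterConfig.load("seq_bn", reduction_factor=8)
model.add_adapter("task_b", config=config, set_active=True)
```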

add_adapter_fusion(adapter_names: Union[Fuse, list, str], config=None, overwrite_ok: bool = False, set_active: bool = False)

Adds AdapterFusion to the model with all the necessary configurations and weight initializations.

Parameters
  • adapter_names (Fuse or list or str) –

    AdapterFusion layer to add. Can be either:

    • a Fuse composition block

    • a list of adapter names to fuse

    • a comma-separated string of adapter names to fuse

  • config (str or dict) –

    adapter fusion configuration, can be either:

    • a string identifying a pre-defined adapter fusion configuration

    • a dictionary representing the adapter fusion configuration

    • the path to a file containing the adapter fusion configuration

  • overwrite_ok (bool, optional) – Overwrite an AdapterFusion layer with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Activate the added AdapterFusion. By default (False), the AdapterFusion is added but not activated.
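
A sketch using a Fuse composition block (adapter names are assumptions):

```python
from adapters.composition import Fuse

# the adapters to fuse must already exist on the model
model.add_adapter("task_a")
model.add_adapter("task_b")
model.add_adapter_fusion(Fuse("task_a", "task_b"), set_active=True)
```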

add_classification_head(head_name, num_labels=2, layers=2, activation_function='tanh', overwrite_ok=False, multilabel=False, id2label=None, use_pooler=False)

Adds a sequence classification head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • num_labels (int, optional) – Number of classification labels. Defaults to 2.

  • layers (int, optional) – Number of layers. Defaults to 2.

  • activation_function (str, optional) – Activation function. Defaults to ‘tanh’.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

  • multilabel (bool, optional) – Enable multilabel classification setup. Defaults to False.
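
For example (head name and label mapping are illustrative):

```python
model.add_classification_head(
    "topic",
    num_labels=4,
    id2label={0: "world", 1: "sports", 2: "business", 3: "sci/tech"},
)
```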

add_masked_lm_head(head_name, activation_function='gelu', layers=2, overwrite_ok=False)

Adds a masked language modeling head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • activation_function (str, optional) – Activation function. Defaults to ‘gelu’.

  • layers (int, optional) – Number of layers. Defaults to 2.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

add_multiple_choice_head(head_name, num_choices=2, layers=2, activation_function='tanh', overwrite_ok=False, id2label=None, use_pooler=False)

Adds a multiple choice head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • num_choices (int, optional) – Number of choices. Defaults to 2.

  • layers (int, optional) – Number of layers. Defaults to 2.

  • activation_function (str, optional) – Activation function. Defaults to ‘tanh’.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

add_qa_head(head_name, num_labels=2, layers=1, activation_function='tanh', overwrite_ok=False, id2label=None)

Adds a question answering head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • num_labels (int, optional) – Number of classification labels. Defaults to 2.

  • layers (int, optional) – Number of layers. Defaults to 1.

  • activation_function (str, optional) – Activation function. Defaults to ‘tanh’.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

add_tagging_head(head_name, num_labels=2, layers=1, activation_function='tanh', overwrite_ok=False, id2label=None)

Adds a token classification head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • num_labels (int, optional) – Number of classification labels. Defaults to 2.

  • layers (int, optional) – Number of layers. Defaults to 1.

  • activation_function (str, optional) – Activation function. Defaults to ‘tanh’.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

apply_to_adapter_layers(fn)

Applies a function to all adapter layers of the model.

apply_to_basemodel_childs(fn)

Applies a function to all direct children of the model if they are an instance of AdapterLayerBase.

average_adapter(adapter_name: str, adapter_list: Union[List[str], Dict[str, float]], weights: Optional[List[float]] = None, combine_strategy: str = 'linear', normalize_weights: bool = True, overwrite_ok: bool = False, set_active: bool = False, svd_rank: Optional[int] = None)

Adds a new adapter module as weighted average of a set of existing adapter modules.

Parameters
  • adapter_name (str) – The name of the adapter module to be added.

  • adapter_list (List[str] or Dict[str, float]) – Specifies the existing adapters whose weights should be averaged. Can either be a list of adapter names or a dictionary mapping adapter names to weights.

  • weights (Optional[List[float]], optional) – The weights corresponding to each adapter module in the list. If not provided, equal weights will be assigned to each adapter.

  • combine_strategy (str, optional) – The strategy used to combine the adapter modules. Available options are “linear”, “lora_linear_only_negate_b”, and “lora_delta_w_svd”. See https://docs.adapterhub.ml/adapter_composition.html#merging-adapters. Defaults to “linear”.

  • normalize_weights (bool, optional) – Whether to normalize the weights. If True, the weights will be normalized to sum up to 1. Defaults to True.

  • overwrite_ok (bool, optional) – Overwrite an adapter with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Set the adapter to be the active one. By default (False), the adapter is added but not activated.

  • svd_rank (int, optional) – The rank to be used for Singular Value Decomposition (SVD) when averaging LoRA adapters. This parameter is only applicable when the combine_strategy is set to “lora_delta_w_svd”. Defaults to None.
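
A sketch of linear averaging (adapter names and weights are illustrative):

```python
# merge two trained adapters into a new one
model.average_adapter(
    "task_avg",
    ["task_a", "task_b"],
    weights=[0.7, 0.3],
    combine_strategy="linear",
    set_active=True,
)
```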

average_head(head_name: str, head_list: Union[List[str], Dict[str, float]], weights: Optional[List[float]] = None, normalize_weights: bool = True, overwrite_ok: bool = False, set_active: bool = False)

Adds a new prediction head as a weighted average of a set of existing prediction heads.

Parameters
  • head_name (str) – The name of the new prediction head to be added.

  • head_list (List[str] or Dict[str, float]) – Specifies the existing heads whose weights should be averaged. Can either be a list of head names or a dictionary mapping head names to weights.

  • weights (Optional[List[float]], optional) – The weights corresponding to each head in the list. If not provided, equal weights will be assigned to each head.

  • normalize_weights (bool, optional) – Whether to normalize the weights. If True, the weights will be normalized to sum up to 1. Defaults to True.

  • overwrite_ok (bool, optional) – Overwrite a head with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Set the head to be the active one. By default (False), the head is added but not activated.

delete_adapter(adapter_name: str)

Deletes the adapter with the specified name from the model.

Parameters

adapter_name (str) – The name of the adapter.

delete_adapter_fusion(adapter_names: Union[Fuse, list, str])

Deletes the AdapterFusion layer of the specified adapters.

Parameters

adapter_names (Union[Fuse, list, str]) – AdapterFusion layer to delete.

delete_head(head_name: str)

Deletes the prediction head with the specified name from the model.

Parameters

head_name (str) – The name of the prediction head to delete.

eject_prefix_tuning(name: str)

Converts the prefix tuning with the given name from the reparameterized form into the flat form.

Parameters

name (str) – The name of the prefix tuning.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, head=None, output_adapter_gating_scores=False, output_adapter_fusion_attentions=False, **kwargs)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
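
In line with the note above, call the module instance rather than forward() directly; the head keyword selects a prediction head for this call. A sketch (tokenizer and head name assumed from earlier examples):

```python
inputs = tokenizer("Hello world", return_tensors="pt")
outputs = model(**inputs, head="cls_head")  # runs registered hooks, then forward()
```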

forward_context(context: ForwardContext, *args, **kwargs)

This method is called by the ForwardContext at the beginning of the forward pass.

forward_head(all_outputs, head_name=None, cls_output=None, attention_mask=None, return_dict=False, context=None, **kwargs)

The forward pass through a prediction head configuration. There are three ways to specify the used prediction head configuration (in order of priority):

  1. If a head_name is passed, the head with the given name is used.

  2. If the forward call is executed within an AdapterSetup context, the head configuration is read from the context (see the sketch below).

  3. If the active_head property is set, the head configuration is read from there.

Parameters
  • all_outputs (dict) – The outputs of the base model.

  • head_name (str, optional) – The name of the prediction head to use. If None, the active head is used.

  • cls_output (torch.Tensor, optional) – The classification output of the model.

  • attention_mask (torch.Tensor, optional) – The attention mask of the model.

  • return_dict (bool) – Whether or not to return a ModelOutput instead of a plain tuple.

  • get_cls_from_eos_tokens (bool) – If set to True, retrieve classifier token representations from the last <eos> token in the sequence. Setting to True requires eos_mask to be passed as well.

  • **kwargs – Additional keyword arguments passed to the forward pass of the head.
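
A sketch of option 2, selecting the adapter and head per call via an AdapterSetup context; the head_setup keyword is assumed from the adapters library:

```python
from adapters import AdapterSetup

# adapter and head names are illustrative
with AdapterSetup("task_a", head_setup="cls_head"):
    outputs = model(**inputs)  # uses "task_a" and "cls_head" inside the context
```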

freeze_model(freeze=True)

Freezes all weights of the model.

get_adapter(name)

If self.base_model is self, the model must inherit from a class that implements this method to preclude infinite recursion.

get_labels(head_name=None)

Returns the labels the given head is assigning/predicting.

Parameters
  • head_name (str, optional) – The name of the head whose labels should be returned. If None, the labels of the active head are returned. Default is None.

Returns: labels

get_labels_dict(head_name=None)

Returns the id2label dict for the given head.

Parameters
  • head_name (str, optional) – The name of the head whose label mapping should be returned. If None, the mapping of the active head is returned. Default is None.

Returns: id2label

get_output_embeddings() Union[Module, List[Module]]

Returns the model’s output embeddings.

Returns

A torch module mapping hidden states to vocabulary.

Return type

nn.Module

head_type()

Checks which head type the decorated function belongs to and raises an error if the model does not support the head type.

init_adapters(model_config, adapters_config, add_prefix_tuning_pool=True)

This method initializes adapter modules and fusion modules from the model config.

iter_layers() Iterable[Tuple[int, Module]]

Iterates over all layers of the model.

load_adapter(adapter_name_or_path: str, config: Optional[Union[dict, str]] = None, version: Optional[str] = None, model_name: Optional[str] = None, load_as: Optional[str] = None, with_head: bool = True, custom_weights_loaders: Optional[List[WeightsLoader]] = None, leave_out: Optional[List[int]] = None, id2label=None, set_active: bool = False, use_safetensors: bool = False, **kwargs) str

Loads a pre-trained pytorch adapter module from the local file system or a remote location.

Parameters
  • adapter_name_or_path (str) –

    can be either:

    • the identifier of a pre-trained task adapter to be loaded from Adapter Hub

    • a path to a directory containing adapter weights saved using model.save_adapter()

    • a URL pointing to a zip folder containing a saved adapter module

  • config (dict or str, optional) – Deprecated.

  • version (str, optional) – The version of the adapter to be loaded.

  • model_name (str, optional) – Deprecated.

  • load_as (str, optional) – Load the adapter using this name. By default, the name with which the adapter was saved will be used.

  • leave_out – Dynamically drop adapter modules in the specified Transformer layers when loading the adapter.

  • set_active (bool, optional) – Set the loaded adapter to be the active one. By default (False), the adapter is loaded but not activated.

  • use_safetensors (bool, optional) – If True, weights are loaded via safetensors if a safetensors checkpoint is available. Otherwise, the regular torch save method is used.

Returns

The name with which the adapter was added to the model.

Return type

str
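
For example (the Hub identifier is hypothetical; a local directory path works the same way):

```python
# load a pre-trained adapter and activate it in one step
name = model.load_adapter("my-org/deberta-v2-task-adapter", set_active=True)
print(name)  # name under which the adapter was registered
```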

load_adapter_fusion(adapter_fusion_name_or_path: str, load_as: Optional[str] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, set_active: bool = False, with_head: bool = True, use_safetensors: bool = False, **kwargs) str

Loads a pre-trained AdapterFusion layer from the local file system.

Parameters
  • adapter_fusion_name_or_path (str) – a path to a directory containing AdapterFusion weights saved using model.save_adapter_fusion().

  • load_as (str, optional) – Load the AdapterFusion using this name. By default, the name with which the AdapterFusion layer was saved will be used.

  • set_active (bool, optional) – Activate the loaded AdapterFusion. By default (False), the AdapterFusion is loaded but not activated.

  • use_safetensors (bool, optional) – If True, weights are loaded via safetensors if a safetensors checkpoint is available. Otherwise, the regular torch save method is used.

Returns

The name with which the AdapterFusion was added to the model.

Return type

str

load_head(save_directory: str, load_as: Optional[str] = None, id2label: Optional[Dict[int, str]] = None, use_safetensors: bool = False, **kwargs) str

Loads a model prediction head from a directory where it was saved using save_head().

Parameters
  • save_directory (str) – Path to the directory where the prediction head is saved.

  • load_as (str, optional) – Load the head using this name. By default, the name with which the head was saved will be used.

  • id2label (Dict[int, str], optional) – Provide a custom mapping from class ids to class labels. Defaults to None.

  • use_safetensors (bool, optional) – If True, weights are loaded via safetensors if a safetensors checkpoint is available. Otherwise, the regular torch save method is used.

Returns

The name with which the prediction head was added to the model.

Return type

str

merge_adapter(name: str)

Merges the weights of the given LoRA module with the Transformer weights as described in the paper.

Parameters

name (str) – LoRA module to merge.

push_adapter_to_hub(repo_id: str, adapter_name: str, datasets_tag: Optional[str] = None, local_path: Optional[str] = None, commit_message: Optional[str] = None, private: Optional[bool] = None, token: Optional[Union[bool, str]] = None, overwrite_adapter_card: bool = False, create_pr: bool = False, revision: Optional[str] = None, commit_description: Optional[str] = None, adapter_card_kwargs: Optional[dict] = None)

Upload an adapter to HuggingFace’s Model Hub.

Parameters
  • repo_id (str) – The name of the repository on the model hub to upload to.

  • adapter_name (str) – The name of the adapter to be uploaded.

  • datasets_tag (str, optional) – Dataset identifier from https://huggingface.co/datasets. Defaults to None.

  • local_path (str, optional) – Local path used as clone directory of the adapter repository. If not specified, will create a temporary directory. Defaults to None.

  • commit_message (str, optional) – Message to commit while pushing. Will default to "add config", "add tokenizer" or "add model" depending on the type of the class.

  • private (bool, optional) – Whether or not the repository created should be private (requires a paying subscription).

  • token (bool or str, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified.

  • overwrite_adapter_card (bool, optional) – Overwrite an existing adapter card with a newly generated one. If set to False, will only generate an adapter card, if none exists. Defaults to False.

  • create_pr (bool, optional) – Whether or not to create a PR with the uploaded files or directly commit.

  • revision (str, optional) – Branch to push the uploaded files to.

  • commit_description (str, optional) – The description of the commit that will be created.

Returns

The url of the adapter repository on the model hub.

Return type

str
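
A sketch (the repository id and adapter name are hypothetical):

```python
model.push_adapter_to_hub(
    "my-org/deberta-v2-task-a",  # hypothetical repo id
    "task_a",
    datasets_tag="sst2",
    commit_message="Upload task_a adapter",
)
```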

reset_adapter()

Resets weights of a LoRA module merged using model.merge_adapter(name).
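
A sketch combining merge_adapter() and reset_adapter() for a LoRA module (names are illustrative):

```python
model.add_adapter("lora_task", config="lora")
# ... train the LoRA adapter ...
model.merge_adapter("lora_task")  # fold LoRA weights into the base weights
outputs = model(**inputs)         # inference without extra adapter compute
model.reset_adapter()             # restore the original base weights
```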

save_adapter(save_directory: str, adapter_name: str, with_head: bool = True, meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, use_safetensors: bool = False)

Saves an adapter and its configuration file to a directory so that it can be shared or reloaded using load_adapter().

Parameters
  • save_directory (str) – Path to a directory where the adapter should be saved.

  • adapter_name (str) – Name of the adapter to be saved.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

Raises

ValueError – If the given adapter name is invalid.
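
A save/reload round trip might look like this (paths and names are illustrative):

```python
model.save_adapter("./checkpoints/task_a", "task_a", with_head=True)
# reload later, optionally under a different name
model.load_adapter("./checkpoints/task_a", load_as="task_a_reloaded")
```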

save_adapter_fusion(save_directory: str, adapter_names: Union[Fuse, list, str], meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, with_head: Union[bool, str] = False, use_safetensors: bool = False)

Saves an AdapterFusion layer and its configuration file to a directory so that it can be shared or reloaded using load_adapter_fusion().

Parameters
  • save_directory (str) – Path to a directory where the AdapterFusion should be saved.

  • adapter_names (Union[Fuse, list, str]) – AdapterFusion to be saved.

  • with_head (Union[bool, str]) – If True, will save a head with the same name as the AdapterFusionLayer. If a string, this will be used as the name of the head to be saved.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

Raises

ValueError – If the given AdapterFusion name is invalid.

save_all_adapter_fusions(save_directory: str, meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, use_safetensors: bool = False)

Saves all AdapterFusion layers of this model together with their configuration to subfolders of the given location.

Parameters
  • save_directory (str) – Path to a directory where the AdapterFusion layers should be saved.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

save_all_adapters(save_directory: str, with_head: bool = True, meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, use_safetensors: bool = False)

Saves all adapters of this model together with their configuration to subfolders of the given location.

Parameters
  • save_directory (str) – Path to a directory where the adapters should be saved.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

save_all_heads(save_directory: str, use_safetensors: bool = False)

Saves all prediction heads of this model to subfolders of the given location.

Parameters
  • save_directory (str) – Path to the base directory where prediction heads should be saved.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

save_head(save_directory: str, head_name: Optional[str] = None, use_safetensors: bool = False) None

Saves a model prediction head to a directory such that it can be reloaded using load_head().

Parameters
  • save_directory (str) – Path to the directory where the prediction head should be saved.

  • head_name (str, optional) – Name of the head to save. Set to None if model only has one head. Defaults to None.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

save_pretrained(save_directory: Union[str, PathLike], **kwargs)

Save a model and its configuration file to a directory, so that it can be re-loaded using the [~PreTrainedModel.from_pretrained] class method.

Parameters
  • save_directory (str or os.PathLike) – Directory to which to save. Will be created if it doesn’t exist.

  • is_main_process (bool, optional, defaults to True) – Whether the process calling this is the main process or not. Useful when in distributed training like TPUs and need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.

  • state_dict (nested dictionary of torch.Tensor) – The state dictionary of the model to save. Will default to self.state_dict(), but can be used to only save parts of the model or if special precautions need to be taken when recovering the state dictionary of a model (like when using model parallelism).

  • save_function (Callable) – The function to use to save the state dictionary. Useful on distributed training like TPUs when one need to replace torch.save by another method.

  • push_to_hub (bool, optional, defaults to False) – Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).

  • max_shard_size (int or str, optional, defaults to “5GB”) –

    The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be smaller than this size. If expressed as a string, it needs to be digits followed by a unit (like “5MB”). We default it to 5GB so that models can run easily on free-tier Google Colab instances without CPU OOM issues.

    Note: if a single weight of the model is bigger than max_shard_size, it will be in its own checkpoint shard, which will be bigger than max_shard_size.

  • safe_serialization (bool, optional, defaults to True) – Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle).

  • variant (str, optional) – If specified, weights are saved in the format pytorch_model.<variant>.bin.

  • token (str or bool, optional) – The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).

  • save_peft_format (bool, optional, defaults to True) – For backward compatibility with the PEFT library, in case adapter weights are attached to the model, all keys of the adapters’ state dict need to be prepended with base_model.model. Advanced users can disable this behaviour by setting save_peft_format to False.

  • kwargs (Dict[str, Any], optional) – Additional keyword arguments passed along to the [~utils.PushToHubMixin.push_to_hub] method.

set_active_adapters(adapter_setup: Union[list, AdapterCompositionBlock], skip_layers: Optional[List[int]] = None)

Sets the adapter modules to be used by default in every forward pass. This setting can be overridden by passing the adapter_names parameter in the forward() pass. If no adapter with the given name is found, no module of the respective type will be activated. In case the calling model class supports named prediction heads, this method will attempt to activate a prediction head with the name of the last adapter in the list of passed adapter names.

Parameters

adapter_setup (list) – The list of adapters to be activated by default. Can be a fusion or stacking configuration.
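
For example, stacking two adapters (names are illustrative):

```python
import adapters.composition as ac

# a plain string or a Fuse block works as well
model.set_active_adapters(ac.Stack("task_a", "task_b"))
```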

tie_weights()

Tie the weights between the input embeddings and the output embeddings.

If the torchscript flag is set in the configuration, parameter sharing cannot be handled, so we clone the weights instead.

train_adapter(adapter_setup: Union[list, AdapterCompositionBlock], train_embeddings=False)

Sets the model into the mode for training the given adapters. If self.base_model is self, the model must inherit from a class that implements this method to preclude infinite recursion.
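
A sketch (the adapter name is an assumption):

```python
model.train_adapter("task_a")  # freezes base weights, unfreezes "task_a"
# only adapter (and head) weights remain trainable
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
```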

train_adapter_fusion(adapter_setup: Union[list, AdapterCompositionBlock], unfreeze_adapters=False)

Sets the model into the mode for training adapter fusion over the given list of adapter names. If self.base_model is self, the model must inherit from a class that implements this method to preclude infinite recursion.
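
A sketch (adapter names are assumptions; the referenced adapters and the fusion layer must already exist on the model):

```python
from adapters.composition import Fuse

# train only the fusion layer on top of the (frozen) adapters
model.train_adapter_fusion(Fuse("task_a", "task_b"))
```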