PLBART

The PLBART model was proposed in [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. This is a BART-like model which can be used to perform code-summarization, code-generation, and code-translation tasks. The pre-trained model plbart-base has been trained using a multilingual denoising task on Java, Python, and English.

According to the abstract,

  • PLBART is a sequence-to-sequence model capable of performing a broad spectrum of program and language understanding and generation tasks.

  • PLBART is pre-trained on an extensive collection of Java and Python functions and associated NL text via denoising autoencoding.

  • PLBART learns program syntax, style (e.g., identifier naming convention) and logical flow.

PLBartAdapterModel

class adapters.PLBartAdapterModel(config: PLBartConfig, **kwargs)

PLBART Model with the option to add multiple flexible prediction heads on top. This model inherits from [PreTrainedModel]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config ([PLBartConfig]) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [~PreTrainedModel.from_pretrained] method to load the model weights.
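
Example (a minimal sketch; assumes the adapters library is installed and uses the uclanlp/plbart-base checkpoint; adapter and head names are illustrative):

```python
from adapters import PLBartAdapterModel

# Load the pre-trained PLBART checkpoint with flexible head support
model = PLBartAdapterModel.from_pretrained("uclanlp/plbart-base")

# Add a bottleneck adapter and a matching classification head, then
# freeze the base model and train only the adapter and head
model.add_adapter("defect_detection", config="seq_bn")
model.add_classification_head("defect_detection", num_labels=2)
model.train_adapter("defect_detection")
```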

property active_adapters: AdapterCompositionBlock

If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT official documentation: https://huggingface.co/docs/peft

Gets the current active adapters of the model. In case of multi-adapter inference (combining multiple adapters for inference) returns the list of all active adapters so that users can deal with them accordingly.

For previous PEFT versions (that do not support multi-adapter inference), module.active_adapter will return a single string.

property active_head: Union[str, List[str]]

The active prediction head configuration of this model. Can be either the name of a single available head (string) or a list of multiple available heads. In case of a list of heads, the same base model is forwarded through all specified heads.

Returns

A string or a list of strings describing the active head configuration.

Return type

Union[str, List[str]]
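
Example (a sketch; assumes heads named "defect_detection" and "clone_detection" have been added):

```python
# Activate a single head by name
model.active_head = "defect_detection"

# Or forward the same base model output through several heads at once
model.active_head = ["defect_detection", "clone_detection"]
print(model.active_head)
```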

adapter_fusion_to(adapter_names: Union[Fuse, list, str], device: Optional[Union[device, str]] = None, dtype: Optional[dtype] = None)

Moves the adapter fusion layer with the given name to the specified device and data type.

Parameters
  • adapter_names (Union[Fuse, list, str]) – The name of the adapter fusion layer to be moved.

  • device (torch.device or str, optional) – The device on which the adapter fusion layer should be moved.

  • dtype (torch.dtype, optional) – The data type to which the adapter fusion layer should be cast.

adapter_summary(as_dict=False) Union[str, dict]

Returns a string summary of all adapters currently added to the model. Each entry in the summary table has the following attributes:

  • name: the name of the adapter

  • architecture: the architectural base of the adapter

  • #param: the number of parameters of the adapter

  • %param: the number of parameters of the adapter relative to the full model

  • active: whether the adapter is active

  • train: whether the adapter weights are enabled for training
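
Example (a sketch; assumes at least one adapter has been added to the model):

```python
# Human-readable table of all adapters with their parameter counts and status
print(model.adapter_summary())

# The same information in a structured form for programmatic use
summary = model.adapter_summary(as_dict=True)
```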

adapter_to(name: str, device: Optional[Union[device, str]] = None, dtype: Optional[dtype] = None)

Moves the adapter with the given name to the specified device and data type.

Parameters
  • name (str) – The name of the adapter to be moved.

  • device (torch.device or str, optional) – The device on which the adapter should be moved.

  • dtype (torch.dtype, optional) – The data type to which the adapter should be cast.
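
Example (a sketch; assumes an adapter named "defect_detection" exists and a CUDA device is available):

```python
import torch

# Move only this adapter's weights to the GPU and cast them to float16
model.adapter_to("defect_detection", device="cuda:0", dtype=torch.float16)
```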

add_adapter(adapter_name: str, config=None, overwrite_ok: bool = False, set_active: bool = False)

Adds a new adapter module of the specified type to the model.

Parameters
  • adapter_name (str) – The name of the adapter module to be added.

  • config (str or dict, optional) –

    The adapter configuration, can be either:

    • the string identifier of a pre-defined configuration dictionary

    • a configuration dictionary specifying the full config

    • if not given, the default configuration for this adapter type will be used

  • overwrite_ok (bool, optional) – Overwrite an adapter with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Set the adapter to be the active one. By default (False), the adapter is added but not activated.

If self.base_model is self, the class must inherit from a class that implements this method to preclude infinite recursion.
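
Example (a sketch; "seq_bn" is a pre-defined bottleneck configuration identifier and LoRAConfig is the LoRA configuration class of the adapters library):

```python
from adapters import LoRAConfig

# Add a bottleneck adapter via a pre-defined configuration string
model.add_adapter("code_summarization", config="seq_bn")

# Add a LoRA adapter from an explicit configuration object and activate it
model.add_adapter("code_translation", config=LoRAConfig(r=8, alpha=16), set_active=True)
```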

add_adapter_fusion(adapter_names: Union[Fuse, list, str], config=None, overwrite_ok: bool = False, set_active: bool = False)

Adds AdapterFusion to the model with all the necessary configurations and weight initializations.

Parameters
  • adapter_names (Fuse or list or str) –

    AdapterFusion layer to add. Can be either:

    • a Fuse composition block

    • a list of adapter names to fuse

    • a comma-separated string of adapter names to fuse

  • config (str or dict) –

    adapter fusion configuration, can be either:

    • a string identifying a pre-defined adapter fusion configuration

    • a dictionary representing the adapter fusion configuration

    • the path to a file containing the adapter fusion configuration

  • overwrite_ok (bool, optional) – Overwrite an AdapterFusion layer with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Activate the added AdapterFusion. By default (False), the AdapterFusion is added but not activated.
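
Example (a sketch; assumes the adapters "code_summarization" and "code_translation" from above have been added):

```python
from adapters.composition import Fuse

# Create an AdapterFusion layer over the two adapters and activate it
fusion = Fuse("code_summarization", "code_translation")
model.add_adapter_fusion(fusion, set_active=True)
model.set_active_adapters(fusion)
```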

add_classification_head(head_name, num_labels=2, layers=2, activation_function='tanh', overwrite_ok=False, multilabel=False, id2label=None, use_pooler=False)

Adds a sequence classification head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • num_labels (int, optional) – Number of classification labels. Defaults to 2.

  • layers (int, optional) – Number of layers. Defaults to 2.

  • activation_function (str, optional) – Activation function. Defaults to ‘tanh’.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

  • multilabel (bool, optional) – Enable multilabel classification setup. Defaults to False.
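
Example (a sketch; the head name and labels are illustrative):

```python
# Binary sequence classification head; naming it after its adapter lets both
# be activated together with the same identifier
model.add_classification_head(
    "clone_detection",
    num_labels=2,
    id2label={0: "not_clone", 1: "clone"},
)
model.active_head = "clone_detection"
```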

add_memory_hooks()

Add a memory hook before and after each sub-module forward pass to record increase in memory consumption.

Increase in memory consumption is stored in a mem_rss_diff attribute for each module and can be reset to zero with model.reset_memory_hooks_state().

add_model_tags(tags: Union[List[str], str]) None

Add custom tags into the model that gets pushed to the Hugging Face Hub. Will not overwrite existing tags in the model.

Parameters

tags (Union[List[str], str]) – The desired tags to inject in the model

Examples:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("google-bert/bert-base-cased")

model.add_model_tags(["custom", "custom-bert"])

# Push the model to your namespace with the name "my-custom-bert".
model.push_to_hub("my-custom-bert")
```

add_module(name: str, module: Optional[Module]) None

Add a child module to the current module.

The module can be accessed as an attribute using the given name.

Parameters
  • name (str) – name of the child module. The child module can be accessed from this module using the given name

  • module (Module) – child module to be added to the module.

add_qa_head(head_name, num_labels=2, layers=1, activation_function='tanh', overwrite_ok=False, id2label=None)

Adds a question answering head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • num_labels (int, optional) – Number of classification labels. Defaults to 2.

  • layers (int, optional) – Number of layers. Defaults to 1.

  • activation_function (str, optional) – Activation function. Defaults to ‘tanh’.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

add_seq2seq_lm_head(head_name, layers=1, overwrite_ok=False)

Adds a sequence-to-sequence language modeling head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • layers (int, optional) – Number of layers. Defaults to 1.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.
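
Example (a sketch using the uclanlp/plbart-base tokenizer; without task-specific fine-tuning the generated text will not be a meaningful summary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("uclanlp/plbart-base")

# Language modeling head for sequence-to-sequence tasks such as code summarization
model.add_seq2seq_lm_head("code_to_text")
model.active_head = "code_to_text"

inputs = tokenizer("def maximum(a, b): return a if a > b else b", return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```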

apply(fn: Callable[[Module], None]) T

Apply fn recursively to every submodule (as returned by .children()) as well as self.

Typical use includes initializing the parameters of a model (see also nn-init-doc).

Parameters

fn (Module -> None) – function to be applied to each submodule

Returns

self

Return type

Module

Example:

>>> @torch.no_grad()
>>> def init_weights(m):
>>>     print(m)
>>>     if type(m) == nn.Linear:
>>>         m.weight.fill_(1.0)
>>>         print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
apply_to_adapter_layers(fn)

Applies a function to all adapter layers of the model.

apply_to_basemodel_childs(fn)

Applies a function to all direct children of the model if they are an instance of AdapterLayerBase.

average_adapter(adapter_name: str, adapter_list: Union[List[str], Dict[str, float]], weights: Optional[List[float]] = None, combine_strategy: str = 'linear', normalize_weights: bool = True, overwrite_ok: bool = False, set_active: bool = False, svd_rank: Optional[int] = None)

Adds a new adapter module as weighted average of a set of existing adapter modules.

Parameters
  • adapter_name (str) – The name of the adapter module to be added.

  • adapter_list (List[str] or Dict[str, float]) – Specifies the existing adapters whose weights should be averaged. Can either be a list of adapter names or a dictionary mapping adapter names to weights.

  • weights (Optional[List[float]], optional) – The weights corresponding to each adapter module in the list. If not provided, equal weights will be assigned to each adapter.

  • combine_strategy (str, optional) – The strategy to combine the adapter modules. Available options are “linear”, “lora_linear_only_negate_b”, and “lora_delta_w_svd”. See https://docs.adapterhub.ml/adapter_composition.html#merging-adapters for details. Defaults to “linear”.

  • normalize_weights (bool, optional) – Whether to normalize the weights. If True, the weights will be normalized to sum up to 1. Defaults to True.

  • overwrite_ok (bool, optional) – Overwrite an adapter with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Set the adapter to be the active one. By default (False), the adapter is added but not activated.

  • svd_rank (int, optional) – The rank to be used for Singular Value Decomposition (SVD) when averaging LoRA adapters. This parameter is only applicable when the combine_strategy is set to “lora_delta_w_svd”. Defaults to None.
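
Example (a sketch; assumes the adapters "code_summarization" and "code_translation" exist):

```python
# New adapter whose weights are a weighted average of two existing adapters
model.average_adapter(
    "code_multi_task",
    ["code_summarization", "code_translation"],
    weights=[0.7, 0.3],
    combine_strategy="linear",
    set_active=True,
)
```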

average_head(head_name: str, head_list: Union[List[str], Dict[str, float]], weights: Optional[List[float]] = None, normalize_weights: bool = True, overwrite_ok: bool = False, set_active: bool = False)

Adds a new prediction head as a weighted average of a set of existing prediction heads.

Parameters
  • head_name (str) – The name of the new prediction head to be added.

  • head_list (List[str] or Dict[str, float]) – Specifies the existing heads whose weights should be averaged. Can either be a list of head names or a dictionary mapping head names to weights.

  • weights (Optional[List[float]], optional) – The weights corresponding to each head in the list. If not provided, equal weights will be assigned to each head.

  • normalize_weights (bool, optional) – Whether to normalize the weights. If True, the weights will be normalized to sum up to 1. Defaults to True.

  • overwrite_ok (bool, optional) – Overwrite a head with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Set the head to be the active one. By default (False), the head is added but not activated.

property base_model: Module

The main body of the model.

Type

torch.nn.Module

bfloat16() T

Casts all floating point parameters and buffers to bfloat16 datatype.

Note

This method modifies the module in-place.

Returns

self

Return type

Module

buffers(recurse: bool = True) Iterator[Tensor]

Return an iterator over module buffers.

Parameters

recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.

Yields

torch.Tensor – module buffer

Example:

>>> # xdoctest: +SKIP("undefined vars")
>>> for buf in model.buffers():
>>>     print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
classmethod can_generate() bool

Returns whether this model can generate sequences with .generate().

Returns

Whether this model can generate sequences with .generate().

Return type

bool

children() Iterator[Module]

Return an iterator over immediate children modules.

Yields

Module – a child module

compile(*args, **kwargs)

Compile this Module’s forward using torch.compile().

This Module’s __call__ method is compiled and all arguments are passed as-is to torch.compile().

See torch.compile() for details on the arguments for this function.

compute_transition_scores(sequences: Tensor, scores: Tuple[Tensor], beam_indices: Optional[Tensor] = None, normalize_logits: bool = False) Tensor

Computes the transition scores of sequences given the generation scores (and beam indices, if beam search was used). This is a convenient method to quickly obtain the scores of the selected tokens at generation time.

Parameters
  • sequences (torch.LongTensor) – The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.

  • scores (tuple(torch.FloatTensor)) – Transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams, config.vocab_size).

  • beam_indices (torch.LongTensor, optional) – Beam indices of generated token id at each generation step. torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length). Only required if num_beams>1 at generate-time.

  • normalize_logits (bool, optional, defaults to False) – Whether to normalize the logits (which, for legacy reasons, may be unnormalized).

Returns

A torch.Tensor of shape (batch_size*num_return_sequences, sequence_length) containing

the transition scores (logits)

Return type

torch.Tensor

Examples:

```python
>>> from transformers import GPT2Tokenizer, AutoModelForCausalLM
>>> import numpy as np

>>> tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> tokenizer.pad_token_id = tokenizer.eos_token_id
>>> inputs = tokenizer(["Today is"], return_tensors="pt")
>>> # Example 1: Print the scores for each token generated with Greedy Search
>>> outputs = model.generate(**inputs, max_new_tokens=5, return_dict_in_generate=True, output_scores=True)
>>> transition_scores = model.compute_transition_scores(
...     outputs.sequences, outputs.scores, normalize_logits=True
... )
>>> # input_length is the length of the input prompt for decoder-only models, like the GPT family, and 1 for
>>> # encoder-decoder models, like BART or T5.
>>> input_length = 1 if model.config.is_encoder_decoder else inputs.input_ids.shape[1]
>>> generated_tokens = outputs.sequences[:, input_length:]
>>> for tok, score in zip(generated_tokens[0], transition_scores[0]):
...     # | token | token string | log probability | probability
...     print(f"| {tok:5d} | {tokenizer.decode(tok):8s} | {score.numpy():.3f} | {np.exp(score.numpy()):.2%}")
|   262 |  the     | -1.414 | 24.33%
|  1110 |  day     | -2.609 | 7.36%
|   618 |  when    | -2.010 | 13.40%
|   356 |  we      | -1.859 | 15.58%
|   460 |  can     | -2.508 | 8.14%
>>> # Example 2: Reconstruct the sequence scores from Beam Search
>>> outputs = model.generate(
...     **inputs,
...     max_new_tokens=5,
...     num_beams=4,
...     num_return_sequences=4,
...     return_dict_in_generate=True,
...     output_scores=True,
... )
>>> transition_scores = model.compute_transition_scores(
...     outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=False
... )
>>> # If you sum the generated tokens' scores and apply the length penalty, you'll get the sequence scores.
>>> # Tip 1: recomputing the scores is only guaranteed to match with `normalize_logits=False`. Depending on the
>>> # use case, you might want to recompute it with `normalize_logits=True`.
>>> # Tip 2: the output length does NOT include the input length
>>> output_length = np.sum(transition_scores.numpy() < 0, axis=1)
>>> length_penalty = model.generation_config.length_penalty
>>> reconstructed_scores = transition_scores.sum(axis=1) / (output_length**length_penalty)
>>> print(np.allclose(outputs.sequences_scores, reconstructed_scores))
True
```
config_class

alias of PLBartConfig

cpu() T

Move all model parameters and buffers to the CPU.

Note

This method modifies the module in-place.

Returns

self

Return type

Module

cuda(device: Optional[Union[int, device]] = None) T

Move all model parameters and buffers to the GPU.

This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized.

Note

This method modifies the module in-place.

Parameters

device (int, optional) – if specified, all parameters will be copied to that device

Returns

self

Return type

Module

delete_adapter(adapter_name: str)

Deletes the adapter with the specified name from the model.

Parameters

adapter_name (str) – The name of the adapter.

delete_adapter_fusion(adapter_names: Union[Fuse, list, str])

Deletes the AdapterFusion layer of the specified adapters.

Parameters

adapter_names (Union[Fuse, list, str]) – AdapterFusion layer to delete.

delete_head(head_name: str)

Deletes the prediction head with the specified name from the model.

Parameters

head_name (str) – The name of the prediction head to delete.

dequantize()

Potentially dequantize the model in case it has been quantized by a quantization method that supports dequantization.

property device: device

The device on which the module is (assuming that all the module parameters are on the same device).

Type

torch.device

disable_adapters() None

If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT official documentation: https://huggingface.co/docs/peft

Disable all adapters that are attached to the model. This leads to inferring with the base model only.

disable_input_require_grads()

Removes the _require_grads_hook.

double() T

Casts all floating point parameters and buffers to double datatype.

Note

This method modifies the module in-place.

Returns

self

Return type

Module

property dtype: dtype

The dtype of the module (assuming that all the module parameters have the same dtype).

Type

torch.dtype

property dummy_inputs: Dict[str, Tensor]

Dummy inputs to do a forward pass in the network.

Type

Dict[str, torch.Tensor]

eject_prefix_tuning(name: str)

Converts the prefix tuning with the given name from the reparameterized form into the flat form.

Parameters

name (str) – The name of the prefix tuning.

enable_adapters() None

If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT official documentation: https://huggingface.co/docs/peft

Enable adapters that are attached to the model. The model will use self.active_adapter().

enable_input_require_grads()

Enables the gradients for the input embeddings. This is useful for fine-tuning adapter weights while keeping the model weights fixed.

estimate_tokens(input_dict: Dict[str, Union[Tensor, Any]]) int

Helper function to estimate the total number of tokens from the model inputs.

Parameters

inputs (dict) – The model inputs.

Returns

The total number of tokens.

Return type

int

eval() T

Set the module in evaluation mode.

This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

This is equivalent to self.train(False).

See locally-disable-grad-doc for a comparison between .eval() and several similar mechanisms that may be confused with it.

Returns

self

Return type

Module

extra_repr() str

Set the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

float(*args)

Casts all floating point parameters and buffers to float datatype.

Note

This method modifies the module in-place.

Returns

self

Return type

Module

floating_point_ops(input_dict: Dict[str, Union[Tensor, Any]], exclude_embeddings: bool = True) int

Get number of (optionally, non-embeddings) floating-point operations for the forward and backward passes of a batch with this transformer model. Default approximation neglects the quadratic dependency on the number of tokens (valid if 12 * d_model << sequence_length) as laid out in [this paper](https://arxiv.org/pdf/2001.08361.pdf) section 2.1. Should be overridden for transformers with parameter re-use e.g. Albert or Universal Transformers, or if doing long-range modeling with very high sequence lengths.

Parameters
  • batch_size (int) – The batch size for the forward pass.

  • sequence_length (int) – The number of tokens in each line of the batch.

  • exclude_embeddings (bool, optional, defaults to True) – Whether or not to count embedding and softmax operations.

Returns

The number of floating-point operations.

Return type

int

forward(input_ids=None, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, head_mask=None, decoder_head_mask=None, cross_attn_head_mask=None, encoder_outputs=None, inputs_embeds=None, decoder_inputs_embeds=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, past_key_values=None, head=None, output_adapter_gating_scores=False, output_adapter_fusion_attentions=False, **kwargs)

The [PLBartAdapterModel] forward method, overrides the __call__ special method.

<Tip>

Although the recipe for forward pass needs to be defined within this function, one should call the [Module] instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

</Tip>

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.

    Indices can be obtained using [AutoTokenizer] or [PLBartMultiTokenizer] depending on the checkpoint. See [PreTrainedTokenizer.encode] and [PreTrainedTokenizer.__call__] for details.

    [What are input IDs?](../glossary#input-ids)

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

    [What are attention masks?](../glossary#attention-mask)

  • decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) –

    Indices of decoder input sequence tokens in the vocabulary.

    Indices can be obtained using [AutoTokenizer] or [PLBartMultiTokenizer] depending on the checkpoint. See [PreTrainedTokenizer.encode] and [PreTrainedTokenizer.__call__] for details.

    [What are decoder input IDs?](../glossary#decoder-input-ids)

    PLBart uses a specific language id token as the starting token for decoder_input_ids generation that varies according to source and target language, e.g. 50003 for en_XX, and 50001 for java. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values).

    For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper.

  • decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) – Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.

  • head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) –

    Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,

    • 0 indicates the head is masked.

  • decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) –

    Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,

    • 0 indicates the head is masked.

  • cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) –

    Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,

    • 0 indicates the head is masked.

  • encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) – Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.

  • past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) –

    Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).

    Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

    If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) –

    Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

    If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds.

  • use_cache (bool, optional) – If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).

  • output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) – Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.

  • labels (torch.LongTensor of shape (batch_size,), optional) – Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
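
A sketch of a direct forward pass (assumes the tokenizer loaded above and a classification head named "defect_detection"; the head is selected via the head argument of forward):

```python
inputs = tokenizer("int main() { return 0; }", return_tensors="pt")

# Route the output of the base model through a specific prediction head
outputs = model(**inputs, head="defect_detection")
print(outputs.logits.shape)  # (batch_size, num_labels)
```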

forward_context(context: ForwardContext, *args, **kwargs)

This method is called by the ForwardContext at the beginning of the forward pass.

forward_head(all_outputs, head_name=None, cls_output=None, attention_mask=None, return_dict=False, context=None, **kwargs)

The forward pass through a prediction head configuration. There are three ways to specify the used prediction head configuration (in order of priority):

  1. If a head_name is passed, the head with the given name is used.

  2. If the forward call is executed within an AdapterSetup context, the head configuration is read from the context.

  3. If the active_head property is set, the head configuration is read from there.

Parameters
  • all_outputs (dict) – The outputs of the base model.

  • head_name (str, optional) – The name of the prediction head to use. If None, the active head is used.

  • cls_output (torch.Tensor, optional) – The classification output of the model.

  • attention_mask (torch.Tensor, optional) – The attention mask of the model.

  • return_dict (bool) – Whether or not to return a ModelOutput instead of a plain tuple.

  • get_cls_from_eos_tokens (bool) – If set to True, retrieve classifier token representations from the last <eos> token in the sequence. Setting to True requires eos_mask to be passed as well.

  • **kwargs – Additional keyword arguments passed to the forward pass of the head.

property framework: str

Identifies that this is a PyTorch model.

Type

str

freeze_model(freeze=True)

Freezes all weights of the model.

classmethod from_pretrained(pretrained_model_name_or_path: Optional[Union[str, PathLike]], *model_args, config: Optional[Union[PretrainedConfig, str, PathLike]] = None, cache_dir: Optional[Union[str, PathLike]] = None, ignore_mismatched_sizes: bool = False, force_download: bool = False, local_files_only: bool = False, token: Optional[Union[bool, str]] = None, revision: str = 'main', use_safetensors: Optional[bool] = None, **kwargs) PreTrainedModel

Instantiate a pretrained pytorch model from a pre-trained model configuration.

The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning task.

The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those weights are discarded.

If the model weights are in the same precision as the base model (and it is a supported model), weights will be lazily loaded in using the meta device and brought into memory once an input is passed through that layer, regardless of low_cpu_mem_usage.

Parameters
  • pretrained_model_name_or_path (str or os.PathLike, optional) –

    Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.

    • A path to a directory containing model weights saved using [~PreTrainedModel.save_pretrained], e.g., ./my_model_directory/.

    • A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

    • A path or url to a model folder containing a flax checkpoint file in .msgpack format (e.g, ./flax_model/ containing flax_model.msgpack). In this case, from_flax should be set to True.

    • None if you are both providing the configuration and state dictionary (resp. with keyword arguments config and state_dict).

  • model_args (sequence of positional arguments, optional) – All remaining positional arguments will be passed to the underlying model’s __init__ method.

  • config (Union[PretrainedConfig, str, os.PathLike], optional) –

    Can be either:

    • an instance of a class derived from [PretrainedConfig],

    • a string or path valid as input to [~PretrainedConfig.from_pretrained].

    Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).

    • The model was saved using [~PreTrainedModel.save_pretrained] and is reloaded by supplying the save directory.

    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.

  • state_dict (Dict[str, torch.Tensor], optional) –

    A state dictionary to use instead of a state dictionary loaded from saved weights file.

    This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [~PreTrainedModel.save_pretrained] and [~PreTrainedModel.from_pretrained] is not a simpler option.

  • cache_dir (Union[str, os.PathLike], optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

  • from_tf (bool, optional, defaults to False) – Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).

  • from_flax (bool, optional, defaults to False) – Load the model weights from a Flax checkpoint save file (see docstring of pretrained_model_name_or_path argument).

  • ignore_mismatched_sizes (bool, optional, defaults to False) – Whether or not to raise an error if some of the weights from the checkpoint do not have the same size as the weights of the model (if for instance, you are instantiating a model with 10 labels from a checkpoint with 3 labels).

  • force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

  • resume_download – Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.

  • proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {‘http’: ‘foo.bar:3128’, ‘http://hostname’: ‘foo.bar:4012’}. The proxies are used on each request.

  • output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

  • local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (i.e., do not try to download the model).

  • token (str or bool, optional) – The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).

  • revision (str, optional, defaults to “main”) –

    The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

    <Tip>

    To test a pull request you made on the Hub, you can pass revision="refs/pr/<pr_number>".

    </Tip>

  • mirror (str, optional) – Mirror source to accelerate downloads in China. If you are from China and have an accessibility problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. Please refer to the mirror site for more information.

  • _fast_init (bool, optional, defaults to True) –

    Whether or not to disable fast initialization.

    <Tip warning={true}>

    One should only disable _fast_init to ensure backwards compatibility with transformers.__version__ < 4.6.0 for seeded model initialization. This argument will be removed at the next major version. See [pull request 11471](https://github.com/huggingface/transformers/pull/11471) for more information.

    </Tip>

  • attn_implementation (str, optional) – The attention implementation to use in the model (if relevant). Can be any of “eager” (manual implementation of the attention), “sdpa” (using [F.scaled_dot_product_attention](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), or “flash_attention_2” (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual “eager” implementation.

  Parameters for big model inference:

  • low_cpu_mem_usage (bool, optional) –

    Tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. Generally should be combined with a device_map (such as “auto”) for best results. This is an experimental feature and subject to change at any moment.

    <Tip>

    If the model weights are in the same precision as the model loaded in, low_cpu_mem_usage (without device_map) is redundant and will not provide any benefit in regards to CPU memory usage. However, this should still be enabled if you are passing in a device_map.

    </Tip>

  • torch_dtype (str or torch.dtype, optional) –

    Override the default torch.dtype and load the model under a specific dtype. The different options are:

    1. torch.float16 or torch.bfloat16 or torch.float: load in a specified dtype, ignoring the model’s config.torch_dtype if one exists. If not specified, the model will get loaded in torch.float (fp32).

    2. ”auto” - A torch_dtype entry in the config.json file of the model will be attempted to be used. If this entry isn’t found then next check the dtype of the first weight in the checkpoint that’s of a floating point type and use that as dtype. This will load the model using the dtype it was saved in at the end of the training. It can’t be used as an indicator of how the model was trained. Since it could be trained in one of half precision dtypes, but saved in fp32.

    3. A string that is a valid torch.dtype. E.g. “float32” loads the model in torch.float32, “float16” loads in torch.float16 etc.

    <Tip>

    For some models the dtype they were trained in is unknown - you may try to check the model’s paper or reach out to the authors and ask them to add this information to the model’s card and to insert the torch_dtype entry in config.json on the hub.

    </Tip>

  • device_map (str or Dict[str, Union[int, str, torch.device]] or int or torch.device, optional) –

    A map that specifies where each submodule should go. It doesn’t need to be refined to each parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the same device. If we only pass the device (e.g., “cpu”, “cuda:1”, “mps”, or a GPU ordinal rank like 1) on which the model will be allocated, the device map will map the entire model to this device. Passing device_map = 0 means put the whole model on GPU 0.

    To have Accelerate compute the most optimized device_map automatically, set device_map=”auto”. For more information about each option see [designing a device map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).

  • max_memory (Dict, optional) – A dictionary mapping device identifiers to maximum memory. Will default to the maximum memory available for each GPU and the available CPU RAM if unset.

  • offload_folder (str or os.PathLike, optional) – If the device_map contains any value “disk”, the folder where we will offload weights.

  • offload_state_dict (bool, optional) – If True, will temporarily offload the CPU state dict to the hard drive to avoid getting out of CPU RAM if the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True when there is some disk offload.

  • offload_buffers (bool, optional) – Whether or not to offload the buffers with the model parameters.

  • quantization_config (Union[QuantizationConfigMixin,Dict], optional) – A dictionary of configuration parameters or a QuantizationConfigMixin object for quantization (e.g bitsandbytes, gptq). There may be other quantization-related kwargs, including load_in_4bit and load_in_8bit, which are parsed by QuantizationConfigParser. Supported only for bitsandbytes quantizations and not preferred; consider inserting all such arguments into quantization_config instead.

  • subfolder (str, optional, defaults to “”) – In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can specify the folder name here.

  • variant (str, optional) – If specified load weights from variant filename, e.g. pytorch_model.<variant>.bin. variant is ignored when using from_tf or from_flax.

  • use_safetensors (bool, optional, defaults to None) – Whether or not to use safetensors checkpoints. Defaults to None. If not specified and safetensors is not installed, it will be set to False.

  • kwargs (remaining dictionary of keyword arguments, optional) –

    Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)

    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ([~PretrainedConfig.from_pretrained]). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

<Tip>

Activate the special [“offline-mode”](https://huggingface.co/transformers/installation.html#offline-mode) to use this method in a firewalled environment.

</Tip>

Examples:

```python >>> from transformers import BertConfig, BertModel

>>> # Download model and configuration from huggingface.co and cache.
>>> model = BertModel.from_pretrained("google-bert/bert-base-uncased")
>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable).
>>> model = BertModel.from_pretrained("./test/saved_model/")
>>> # Update configuration during loading.
>>> model = BertModel.from_pretrained("google-bert/bert-base-uncased", output_attentions=True)
>>> assert model.config.output_attentions == True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower, for example purposes, not runnable).
>>> config = BertConfig.from_json_file("./tf_model/my_tf_model_config.json")
>>> model = BertModel.from_pretrained("./tf_model/my_tf_checkpoint.ckpt.index", from_tf=True, config=config)
>>> # Loading from a Flax checkpoint file instead of a PyTorch model (slower)
>>> model = BertModel.from_pretrained("google-bert/bert-base-uncased", from_flax=True)
```
low_cpu_mem_usage algorithm:

This is an experimental function that loads the model using ~1x model size CPU memory

Here is how it works:

  1. save which state_dict keys we have

  2. drop state_dict before the model is created, since the latter takes 1x model size CPU memory

  3. after the model has been instantiated switch to the meta device all params/buffers that are going to be replaced from the loaded state_dict

  4. load state_dict a second time

  5. replace the params/buffers from the state_dict

Currently, it can’t handle deepspeed ZeRO stage 3 and ignores loading errors

generate(inputs: Optional[Tensor] = None, generation_config: Optional[GenerationConfig] = None, logits_processor: Optional[LogitsProcessorList] = None, stopping_criteria: Optional[StoppingCriteriaList] = None, prefix_allowed_tokens_fn: Optional[Callable[[int, Tensor], List[int]]] = None, synced_gpus: Optional[bool] = None, assistant_model: Optional[PreTrainedModel] = None, streamer: Optional[BaseStreamer] = None, negative_prompt_ids: Optional[Tensor] = None, negative_prompt_attention_mask: Optional[Tensor] = None, **kwargs) Union[GenerateDecoderOnlyOutput, GenerateEncoderDecoderOutput, GenerateBeamDecoderOnlyOutput, GenerateBeamEncoderDecoderOutput, LongTensor]

Generates sequences of token ids for models with a language modeling head.

<Tip warning={true}>

Most generation-controlling parameters are set in generation_config which, if not passed, will be set to the model’s default generation configuration. You can override any generation_config by passing the corresponding parameters to generate(), e.g. .generate(inputs, num_beams=4, do_sample=True).

For an overview of generation strategies and code examples, check out the [following guide](../generation_strategies).

</Tip>

Parameters
  • inputs (torch.Tensor of varying shape depending on the modality, optional) – The sequence used as a prompt for the generation or as model inputs to the encoder. If None the method initializes it with bos_token_id and a batch size of 1. For decoder-only models inputs should be in the format of input_ids. For encoder-decoder models inputs can represent any of input_ids, input_values, input_features, or pixel_values.

  • generation_config ([~generation.GenerationConfig], optional) – The generation configuration to be used as base parametrization for the generation call. **kwargs passed to generate matching the attributes of generation_config will override them. If generation_config is not provided, the default will be used, which has the following loading priority: 1) from the generation_config.json model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit [~generation.GenerationConfig]’s default values, whose documentation should be checked to parameterize generation.

  • logits_processor (LogitsProcessorList, optional) – Custom logits processors that complement the default logits processors built from arguments and generation config. If a logit processor is passed that is already created with the arguments or a generation config an error is thrown. This feature is intended for advanced users.

  • stopping_criteria (StoppingCriteriaList, optional) – Custom stopping criteria that complements the default stopping criteria built from arguments and a generation config. If a stopping criteria is passed that is already created with the arguments or a generation config an error is thrown. If your stopping criteria depends on the scores input, make sure you pass return_dict_in_generate=True, output_scores=True to generate. This feature is intended for advanced users.

  • prefix_allowed_tokens_fn (Callable[[int, torch.Tensor], List[int]], optional) – If provided, this function constraints the beam search to allowed tokens only at each step. If not provided no constraint is applied. This function takes 2 arguments: the batch ID batch_id and input_ids. It has to return a list with the allowed tokens for the next generation step conditioned on the batch ID batch_id and the previously generated tokens inputs_ids. This argument is useful for constrained generation conditioned on the prefix, as described in [Autoregressive Entity Retrieval](https://arxiv.org/abs/2010.00904).

  • synced_gpus (bool, optional) – Whether to continue running the while loop until max_length. Unless overridden this flag will be set to True under DeepSpeed ZeRO Stage 3 multiple GPUs environment to avoid hanging if one GPU finished generating before other GPUs. Otherwise it’ll be set to False.

  • assistant_model (PreTrainedModel, optional) – An assistant model that can be used to accelerate generation. The assistant model must have the exact same tokenizer. The acceleration is achieved when forecasting candidate tokens with the assistant model is much faster than running generation with the model you’re calling generate from. As such, the assistant model should be much smaller.

  • streamer (BaseStreamer, optional) – Streamer object that will be used to stream the generated sequences. Generated tokens are passed through streamer.put(token_ids) and the streamer is responsible for any further processing.

  • negative_prompt_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) – The negative prompt needed for some processors such as CFG. The batch size must match the input batch size. This is an experimental feature, subject to breaking API changes in future versions.

  • negative_prompt_attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) – Attention_mask for negative_prompt_ids.

  • kwargs (Dict[str, Any], optional) – Ad hoc parametrization of generation_config and/or additional model-specific kwargs that will be forwarded to the forward function of the model. If the model is an encoder-decoder model, encoder specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with decoder_.

Returns

A [~utils.ModelOutput] (if return_dict_in_generate=True or when config.return_dict_in_generate=True) or a torch.LongTensor.

If the model is not an encoder-decoder model (model.config.is_encoder_decoder=False), the possible [~utils.ModelOutput] types are:

  • [~generation.GenerateDecoderOnlyOutput],

  • [~generation.GenerateBeamDecoderOnlyOutput]

If the model is an encoder-decoder model (model.config.is_encoder_decoder=True), the possible [~utils.ModelOutput] types are:

  • [~generation.GenerateEncoderDecoderOutput],

  • [~generation.GenerateBeamEncoderDecoderOutput]

Return type

[~utils.ModelOutput] or torch.LongTensor

get_adapter(name)

If self.base_model is self, the class must inherit from a class that implements this method to preclude infinite recursion.

get_adapter_state_dict(adapter_name: Optional[str] = None) dict

If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT official documentation: https://huggingface.co/docs/peft

Gets the adapter state dict that should only contain the weights tensors of the specified adapter_name adapter. If no adapter_name is passed, the active adapter is used.

Parameters

adapter_name (str, optional) – The name of the adapter to get the state dict from. If no name is passed, the active adapter is used.

get_buffer(target: str) Tensor

Return the buffer given by target if it exists, otherwise throw an error.

See the docstring for get_submodule for a more detailed explanation of this method’s functionality as well as how to correctly specify target.

Parameters

target – The fully-qualified string name of the buffer to look for. (See get_submodule for how to specify a fully-qualified string.)

Returns

The buffer referenced by target

Return type

torch.Tensor

Raises

AttributeError – If the target string references an invalid path or resolves to something that is not a buffer

get_extended_attention_mask(attention_mask: Tensor, input_shape: Tuple[int], device: device = None, dtype: torch.float32 = None) Tensor

Makes broadcastable attention and causal masks so that future and masked tokens are ignored.

Parameters
  • attention_mask (torch.Tensor) – Mask with ones indicating tokens to attend to, zeros for tokens to ignore.

  • input_shape (Tuple[int]) – The shape of the input to the model.

Returns

torch.Tensor The extended attention mask, with the same dtype as attention_mask.dtype.

get_extra_state() Any

Return any extra state to include in the module’s state_dict.

Implement this and a corresponding set_extra_state() for your module if you need to store extra state. This function is called when building the module’s state_dict().

Note that extra state should be picklable to ensure working serialization of the state_dict. We only provide backwards compatibility guarantees for serializing Tensors; other objects may break backwards compatibility if their serialized pickled form changes.

Returns

Any extra state to store in the module’s state_dict

Return type

object

get_head_mask(head_mask: Optional[Tensor], num_hidden_layers: int, is_attention_chunked: bool = False) Tensor

Prepare the head mask if needed.

Parameters
  • head_mask (torch.Tensor with shape [num_heads] or [num_hidden_layers x num_heads], optional) – The mask indicating if we should keep the heads or not (1.0 for keep, 0.0 for discard).

  • num_hidden_layers (int) – The number of hidden layers in the model.

  • is_attention_chunked (bool, optional, defaults to False) – Whether or not the attention scores are computed by chunks.

Returns

torch.Tensor with shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] or list with [None] for each layer.

get_input_embeddings() Module

Returns the model’s input embeddings.

Returns

A torch module mapping vocabulary to hidden states.

Return type

nn.Module

get_labels(head_name=None)

Returns the labels the given head is assigning/predicting.

Parameters
  • head_name (str, optional) – The name of the head whose labels should be returned. Default is None. If the name is None, the labels of the active head are returned.

Returns: labels

get_labels_dict(head_name=None)

Returns the id2label dict for the given head.

Parameters
  • head_name (str, optional) – The name of the head whose labels should be returned. Default is None. If the name is None, the id2label dict of the active head is returned.

Returns: id2label

get_memory_footprint(return_buffers=True)

Get the memory footprint of a model. This will return the memory footprint of the current model in bytes. Useful to benchmark the memory footprint of the current model and design some tests. Solution inspired from the PyTorch discussions: https://discuss.pytorch.org/t/gpu-memory-that-model-uses/56822/2

Parameters

return_buffers (bool, optional, defaults to True) – Whether to return the size of the buffer tensors in the computation of the memory footprint. Buffers are tensors that do not require gradients and not registered as parameters. E.g. mean and std in batch norm layers. Please see: https://discuss.pytorch.org/t/what-pytorch-means-by-buffers/120266/2

get_output_embeddings() Union[Module, List[Module]]

Returns the model’s output embeddings.

Returns

A torch module mapping hidden states to vocabulary.

Return type

nn.Module

get_parameter(target: str) Parameter

Return the parameter given by target if it exists, otherwise throw an error.

See the docstring for get_submodule for a more detailed explanation of this method’s functionality as well as how to correctly specify target.

Parameters

target – The fully-qualified string name of the Parameter to look for. (See get_submodule for how to specify a fully-qualified string.)

Returns

The Parameter referenced by target

Return type

torch.nn.Parameter

Raises

AttributeError – If the target string references an invalid path or resolves to something that is not an nn.Parameter

get_submodule(target: str) Module

Return the submodule given by target if it exists, otherwise throw an error.

For example, let’s say you have an nn.Module A that looks like this:

A(
    (net_b): Module(
        (net_c): Module(
            (conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))
        )
        (linear): Linear(in_features=100, out_features=200, bias=True)
    )
)

(The diagram shows an nn.Module A. A has a nested submodule net_b, which itself has two submodules net_c and linear. net_c then has a submodule conv.)

To check whether or not we have the linear submodule, we would call get_submodule("net_b.linear"). To check whether we have the conv submodule, we would call get_submodule("net_b.net_c.conv").

The runtime of get_submodule is bounded by the degree of module nesting in target. A query against named_modules achieves the same result, but it is O(N) in the number of transitive modules. So, for a simple check to see if some submodule exists, get_submodule should always be used.

Parameters

target – The fully-qualified string name of the submodule to look for. (See above example for how to specify a fully-qualified string.)

Returns

The submodule referenced by target

Return type

torch.nn.Module

Raises

AttributeError – If the target string references an invalid path or resolves to something that is not an nn.Module
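
A small runnable sketch (plain PyTorch) mirroring the nested-module example above:

```python
import torch.nn as nn

net = nn.Sequential()
net.add_module("net_b", nn.Sequential())
net.net_b.add_module("linear", nn.Linear(100, 200))

linear = net.get_submodule("net_b.linear")         # the nn.Linear module
weight = net.get_parameter("net_b.linear.weight")  # its weight Parameter
print(type(linear).__name__, tuple(weight.shape))  # Linear (200, 100)
```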

gradient_checkpointing_disable()

Deactivates gradient checkpointing for the current model.

Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint activations”.

gradient_checkpointing_enable(gradient_checkpointing_kwargs=None)

Activates gradient checkpointing for the current model.

Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint activations”.

We pass the __call__ method of the modules instead of forward because __call__ attaches all the hooks of the module. https://discuss.pytorch.org/t/any-different-between-model-input-and-model-forward-input/3690/2

Parameters

gradient_checkpointing_kwargs (dict, optional) – Additional keyword arguments passed along to the torch.utils.checkpoint.checkpoint function.
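
A short sketch of toggling gradient checkpointing before training; the use_reentrant flag is one of the keyword arguments accepted by torch.utils.checkpoint.checkpoint:

```python
from adapters import PLBartAdapterModel

model = PLBartAdapterModel.from_pretrained("uclanlp/plbart-base")
# Trade extra compute for lower activation memory during training.
model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant": False})
assert model.is_gradient_checkpointing
model.gradient_checkpointing_disable()
```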

half(*args)

Casts all floating point parameters and buffers to half datatype.

Note

This method modifies the module in-place.

Returns

self

Return type

Module

head_type()

Checks which head type the decorated function belongs to and raises an error if the model does not support the head type.

heal_tokens(input_ids: LongTensor, tokenizer: Optional[PreTrainedTokenizerBase] = None) LongTensor

Generates sequences of token ids for models with a language modeling head.

Parameters
  • input_ids (torch.LongTensor) – The sequence used as a prompt for the generation.

  • tokenizer (PreTrainedTokenizerBase, optional) – The tokenizer used to decode the input ids.

Returns

torch.LongTensor where each sequence has its tail token replaced with its appropriate extension.

init_adapters(model_config, adapters_config, add_prefix_tuning_pool=True)

This method initializes adapter modules and fusion modules from the model config.

init_weights()

If needed, prunes and maybe initializes weights. If using a custom PreTrainedModel, you need to implement any initialization logic in _init_weights.

invert_attention_mask(encoder_attention_mask: Tensor) Tensor

Invert an attention mask (e.g., switches 0. and 1.).

Parameters

encoder_attention_mask (torch.Tensor) – An attention mask.

Returns

The inverted attention mask.

Return type

torch.Tensor

ipu(device: Optional[Union[int, device]] = None) T

Move all model parameters and buffers to the IPU.

This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on IPU while being optimized.

Note

This method modifies the module in-place.

Parameters

device (int, optional) – if specified, all parameters will be copied to that device

Returns

self

Return type

Module

property is_gradient_checkpointing: bool

Whether gradient checkpointing is activated for this model or not.

Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint activations”.

iter_layers() Iterable[Tuple[int, Module]]

Iterates over all layers of the model.

load_adapter(adapter_name_or_path: str, config: Optional[Union[dict, str]] = None, version: Optional[str] = None, model_name: Optional[str] = None, load_as: Optional[str] = None, with_head: bool = True, custom_weights_loaders: Optional[List[WeightsLoader]] = None, leave_out: Optional[List[int]] = None, id2label=None, set_active: bool = False, use_safetensors: bool = False, **kwargs) str

Loads a pre-trained pytorch adapter module from the local file system or a remote location.

Parameters
  • adapter_name_or_path (str) –

    can be either:

    • the identifier of a pre-trained task adapter to be loaded from Adapter Hub

    • a path to a directory containing adapter weights saved using model.save_adapter()

    • a URL pointing to a zip folder containing a saved adapter module

  • config (dict or str, optional) – Deprecated.

  • version (str, optional) – The version of the adapter to be loaded.

  • model_name (str, optional) – Deprecated.

  • load_as (str, optional) – Load the adapter using this name. By default, the name with which the adapter was saved will be used.

  • leave_out – Dynamically drop adapter modules in the specified Transformer layers when loading the adapter.

  • set_active (bool, optional) – Set the loaded adapter to be the active one. By default (False), the adapter is loaded but not activated.

  • use_safetensors (bool, optional) – If True, weights are loaded via safetensors if safetensors checkpoint is available. Otherwise, the regular torch save method is used.

Returns

The name with which the adapter was added to the model.

Return type

str
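
A usage sketch; the local directory path is hypothetical and would typically have been produced by model.save_adapter():

```python
from adapters import PLBartAdapterModel

model = PLBartAdapterModel.from_pretrained("uclanlp/plbart-base")
# "./adapters/code_sum" is a hypothetical directory written by save_adapter().
name = model.load_adapter("./adapters/code_sum", load_as="code_sum", set_active=True)
print(name)  # "code_sum"
```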

load_adapter_fusion(adapter_fusion_name_or_path: str, load_as: Optional[str] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, set_active: bool = False, with_head: bool = True, use_safetensors: bool = False, **kwargs) str

Loads a pre-trained AdapterFusion layer from the local file system.

Parameters
  • adapter_fusion_name_or_path (str) – a path to a directory containing AdapterFusion weights saved using model.save_adapter_fusion().

  • load_as (str, optional) – Load the AdapterFusion using this name. By default, the name with which the AdapterFusion layer was saved will be used.

  • set_active (bool, optional) – Activate the loaded AdapterFusion. By default (False), the AdapterFusion is loaded but not activated.

  • use_safetensors (bool, optional) – If True, weights are loaded via safetensors if safetensors checkpoint is available. Otherwise, the regular torch save method is used.

Returns

The name with which the AdapterFusion was added to the model.

Return type

str

load_head(save_directory: str, load_as: Optional[str] = None, id2label: Optional[Dict[int, str]] = None, use_safetensors: bool = False, **kwargs) str

Loads a model prediction head from a directory where it was saved using save_head().

Parameters
  • save_directory (str) – Path to the directory where the prediction head is saved.

  • load_as (str, optional) – Load the prediction head using this name. By default, the name with which the head was saved will be used.

  • id2label (Dict[int, str], optional) – Provide a custom mapping from class ids to class labels. Defaults to None.

  • use_safetensors (bool, optional) – If True, weights are loaded via safetensors if safetensors checkpoint is available. Otherwise, the regular torch save method is used.

Returns

The name with which the prediction head was added to the model.

Return type

str

load_state_dict(state_dict: Mapping[str, Any], strict: bool = True, assign: bool = False)

Copy parameters and buffers from state_dict into this module and its descendants.

If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function.

Warning

If assign is True the optimizer must be created after the call to load_state_dict unless get_swap_module_params_on_conversion() is True.

Parameters
  • state_dict (dict) – a dict containing parameters and persistent buffers.

  • strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict() function. Default: True

  • assign (bool, optional) – When False, the properties of the tensors in the current module are preserved; when True, the properties of the Tensors in the state dict are preserved. The only exception is the requires_grad field. Default: False

Returns

  • missing_keys is a list of str containing any keys that are expected by this module but missing from the provided state_dict.

  • unexpected_keys is a list of str containing the keys that are not expected by this module but present in the provided state_dict.

Return type

NamedTuple with missing_keys and unexpected_keys fields

Note

If a parameter or buffer is registered as None and its corresponding key exists in state_dict, load_state_dict() will raise a RuntimeError.
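
A sketch of inspecting the returned key lists when loading a partial checkpoint; the checkpoint file name is hypothetical:

```python
import torch
from adapters import PLBartAdapterModel

model = PLBartAdapterModel.from_pretrained("uclanlp/plbart-base")
state = torch.load("plbart_weights.pt", map_location="cpu")  # hypothetical checkpoint file
result = model.load_state_dict(state, strict=False)
print(result.missing_keys)     # keys the model expects but the checkpoint lacks
print(result.unexpected_keys)  # keys the checkpoint has but the model does not
```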

merge_adapter(name: str)

Merges the weights of the given LoRA module with the Transformer weights as described in the LoRA paper.

Parameters

name (str) – LoRA module to merge.
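
For illustration, a hedged sketch that merges a LoRA adapter for inference and undoes the merge afterwards (the adapter name is hypothetical; see reset_adapter() further below):

```python
from adapters import PLBartAdapterModel

model = PLBartAdapterModel.from_pretrained("uclanlp/plbart-base")
model.add_adapter("my_lora", config="lora")  # hypothetical LoRA adapter
model.merge_adapter("my_lora")               # fold the LoRA weights into the base weights
# ... run inference without adapter overhead ...
model.reset_adapter()                        # restore the unmerged weights
```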

modules() Iterator[Module]

Return an iterator over all modules in the network.

Yields

Module – a module in the network

Note

Duplicate modules are returned only once. In the following example, l will be returned only once.

Example:

>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
...     print(idx, '->', m)

0 -> Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)

named_buffers(prefix: str = '', recurse: bool = True, remove_duplicate: bool = True) Iterator[Tuple[str, Tensor]]

Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

Parameters
  • prefix (str) – prefix to prepend to all buffer names.

  • recurse (bool, optional) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Defaults to True.

  • remove_duplicate (bool, optional) – whether to remove the duplicated buffers in the result. Defaults to True.

Yields

(str, torch.Tensor) – Tuple containing the name and buffer

Example:

>>> # xdoctest: +SKIP("undefined vars")
>>> for name, buf in self.named_buffers():
>>>     if name in ['running_var']:
>>>         print(buf.size())

named_children() Iterator[Tuple[str, Module]]

Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

Yields

(str, Module) – Tuple containing a name and child module

Example:

>>> # xdoctest: +SKIP("undefined vars")
>>> for name, module in model.named_children():
>>>     if name in ['conv4', 'conv5']:
>>>         print(module)

named_modules(memo: Optional[Set[Module]] = None, prefix: str = '', remove_duplicate: bool = True)

Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

Parameters
  • memo – a memo to store the set of modules already added to the result

  • prefix – a prefix that will be added to the name of the module

  • remove_duplicate – whether to remove the duplicated module instances in the result or not

Yields

(str, Module) – Tuple of name and module

Note

Duplicate modules are returned only once. In the following example, l will be returned only once.

Example:

>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
...     print(idx, '->', m)

0 -> ('', Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))

named_parameters(prefix: str = '', recurse: bool = True, remove_duplicate: bool = True) Iterator[Tuple[str, Parameter]]

Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

Parameters
  • prefix (str) – prefix to prepend to all parameter names.

  • recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

  • remove_duplicate (bool, optional) – whether to remove the duplicated parameters in the result. Defaults to True.

Yields

(str, Parameter) – Tuple containing the name and parameter

Example:

>>> # xdoctest: +SKIP("undefined vars")
>>> for name, param in self.named_parameters():
>>>     if name in ['bias']:
>>>         print(param.size())

num_parameters(only_trainable: bool = False, exclude_embeddings: bool = False) int

Get number of (optionally, trainable or non-embeddings) parameters in the module.

Parameters
  • only_trainable (bool, optional, defaults to False) – Whether or not to return only the number of trainable parameters

  • exclude_embeddings (bool, optional, defaults to False) – Whether or not to return only the number of non-embeddings parameters

Returns

The number of parameters.

Return type

int
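
A brief usage sketch:

```python
from adapters import PLBartAdapterModel

model = PLBartAdapterModel.from_pretrained("uclanlp/plbart-base")
print(model.num_parameters())                         # all parameters
print(model.num_parameters(only_trainable=True))      # e.g. after freezing parts of the model
print(model.num_parameters(exclude_embeddings=True))  # without the embedding matrices
```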

parameters(recurse: bool = True) Iterator[Parameter]

Return an iterator over module parameters.

This is typically passed to an optimizer.

Parameters

recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

Yields

Parameter – module parameter

Example:

>>> # xdoctest: +SKIP("undefined vars")
>>> for param in model.parameters():
>>>     print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)

post_init()

A method executed at the end of each Transformer model initialization, to execute code that needs the model’s modules properly initialized (such as weight initialization).

prune_heads(heads_to_prune: Dict[int, List[int]])

Prunes heads of the base model.

Parameters

heads_to_prune (Dict[int, List[int]]) – Dictionary with keys being selected layer indices (int) and associated values being the list of heads to prune in said layer (list of int). For instance {1: [0, 2], 2: [2, 3]} will prune heads 0 and 2 on layer 1 and heads 2 and 3 on layer 2.

push_adapter_to_hub(repo_id: str, adapter_name: str, datasets_tag: Optional[str] = None, local_path: Optional[str] = None, commit_message: Optional[str] = None, private: Optional[bool] = None, token: Optional[Union[bool, str]] = None, overwrite_adapter_card: bool = False, create_pr: bool = False, revision: Optional[str] = None, commit_description: Optional[str] = None, adapter_card_kwargs: Optional[dict] = None)

Upload an adapter to HuggingFace’s Model Hub.

Parameters
  • repo_id (str) – The name of the repository on the model hub to upload to.

  • adapter_name (str) – The name of the adapter to be uploaded.

  • organization (str, optional) – Organization in which to push the adapter (you must be a member of this organization). Defaults to None.

  • datasets_tag (str, optional) – Dataset identifier from https://huggingface.co/datasets. Defaults to None.

  • local_path (str, optional) – Local path used as clone directory of the adapter repository. If not specified, will create a temporary directory. Defaults to None.

  • commit_message (str, optional) – Message to commit while pushing. Will default to "add config", "add tokenizer" or "add model" depending on the type of the class.

  • private (bool, optional) – Whether or not the repository created should be private (requires a paying subscription).

  • token (bool or str, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified.

  • overwrite_adapter_card (bool, optional) – Overwrite an existing adapter card with a newly generated one. If set to False, will only generate an adapter card, if none exists. Defaults to False.

  • create_pr (bool, optional) – Whether or not to create a PR with the uploaded files or directly commit.

  • revision (str, optional) – Branch to push the uploaded files to.

  • commit_description (str, optional) – The description of the commit that will be created

Returns

The url of the adapter repository on the model hub.

Return type

str
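
A hedged sketch; the repository id, adapter name, and dataset tag are hypothetical, and the adapter would normally be trained before uploading:

```python
from adapters import PLBartAdapterModel

model = PLBartAdapterModel.from_pretrained("uclanlp/plbart-base")
model.add_adapter("code_sum")  # hypothetical adapter, assumed to be trained already
model.push_adapter_to_hub(
    "my-username/plbart-code-sum",   # hypothetical repo id
    "code_sum",
    datasets_tag="code_search_net",  # hypothetical dataset identifier
)
```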

push_to_hub(repo_id: str, use_temp_dir: Optional[bool] = None, commit_message: Optional[str] = None, private: Optional[bool] = None, token: Optional[Union[bool, str]] = None, max_shard_size: Optional[Union[int, str]] = '5GB', create_pr: bool = False, safe_serialization: bool = True, revision: str = None, commit_description: str = None, tags: Optional[List[str]] = None, **deprecated_kwargs) str

Upload the model file to the 🤗 Model Hub.

Parameters
  • repo_id (str) – The name of the repository you want to push your model to. It should contain your organization name when pushing to a given organization.

  • use_temp_dir (bool, optional) – Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to True if there is no directory named like repo_id, False otherwise.

  • commit_message (str, optional) – Message to commit while pushing. Will default to “Upload model”.

  • private (bool, optional) – Whether or not the repository created should be private.

  • token (bool or str, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified.

  • max_shard_size (int or str, optional, defaults to “5GB”) – Only applicable for models. The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be smaller than this size. If expressed as a string, needs to be digits followed by a unit (like “5MB”). We default it to “5GB” so that users can easily load models on free-tier Google Colab instances without any CPU OOM issues.

  • create_pr (bool, optional, defaults to False) – Whether or not to create a PR with the uploaded files or directly commit.

  • safe_serialization (bool, optional, defaults to True) – Whether or not to convert the model weights in safetensors format for safer serialization.

  • revision (str, optional) – Branch to push the uploaded files to.

  • commit_description (str, optional) – The description of the commit that will be created

  • tags (List[str], optional) – List of tags to push on the Hub.

Examples:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("google-bert/bert-base-cased")

# Push the model to your namespace with the name "my-finetuned-bert".
model.push_to_hub("my-finetuned-bert")

# Push the model to an organization with the name "my-finetuned-bert".
model.push_to_hub("huggingface/my-finetuned-bert")
```

register_backward_hook(hook: Callable[[Module, Union[Tuple[Tensor, ...], Tensor], Union[Tuple[Tensor, ...], Tensor]], Union[None, Tuple[Tensor, ...], Tensor]]) RemovableHandle

Register a backward hook on the module.

This function is deprecated in favor of register_full_backward_hook() and the behavior of this function will change in future versions.

Returns

a handle that can be used to remove the added hook by calling handle.remove()

Return type

torch.utils.hooks.RemovableHandle

register_buffer(name: str, tensor: Optional[Tensor], persistent: bool = True) None

Add a buffer to the module.

This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the module’s state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module’s state_dict.

Buffers can be accessed as attributes using given names.

Parameters
  • name (str) – name of the buffer. The buffer can be accessed from this module using the given name

  • tensor (Tensor or None) – buffer to be registered. If None, then operations that run on buffers, such as cuda, are ignored. If None, the buffer is not included in the module’s state_dict.

  • persistent (bool) – whether the buffer is part of this module’s state_dict.

Example:

>>> # xdoctest: +SKIP("undefined vars")
>>> self.register_buffer('running_mean', torch.zeros(num_features))

classmethod register_for_auto_class(auto_class='AutoModel')

Register this class with a given auto class. This should only be used for custom models as the ones in the library are already mapped with an auto class.

Warning

This API is experimental and may have some slight breaking changes in the next releases.

Parameters

auto_class (str or type, optional, defaults to “AutoModel”) – The auto class to register this new model with.

register_forward_hook(hook: Union[Callable[[T, Tuple[Any, ...], Any], Optional[Any]], Callable[[T, Tuple[Any, ...], Dict[str, Any], Any], Optional[Any]]], *, prepend: bool = False, with_kwargs: bool = False, always_call: bool = False) RemovableHandle

Register a forward hook on the module.

The hook will be called every time after forward() has computed an output.

If with_kwargs is False or not specified, the input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks and only to the forward. The hook can modify the output. It can modify the input inplace but it will not have an effect on forward since this is called after forward() is called. The hook should have the following signature:

hook(module, args, output) -> None or modified output

If with_kwargs is True, the forward hook will be passed the kwargs given to the forward function and be expected to return the output possibly modified. The hook should have the following signature:

hook(module, args, kwargs, output) -> None or modified output

Parameters
  • hook (Callable) – The user defined hook to be registered.

  • prepend (bool) – If True, the provided hook will be fired before all existing forward hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing forward hooks on this torch.nn.modules.Module. Note that global forward hooks registered with register_module_forward_hook() will fire before all hooks registered by this method. Default: False

  • with_kwargs (bool) – If True, the hook will be passed the kwargs given to the forward function. Default: False

  • always_call (bool) – If True the hook will be run regardless of whether an exception is raised while calling the Module. Default: False

Returns

a handle that can be used to remove the added hook by calling handle.remove()

Return type

torch.utils.hooks.RemovableHandle
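
A minimal plain-PyTorch sketch of registering and removing a forward hook:

```python
import torch
import torch.nn as nn

linear = nn.Linear(2, 2)

def log_output(module, args, output):
    print(f"{module.__class__.__name__} produced shape {tuple(output.shape)}")
    return None  # returning None keeps the original output

handle = linear.register_forward_hook(log_output)
linear(torch.ones(1, 2))  # prints: Linear produced shape (1, 2)
handle.remove()           # detach the hook once it is no longer needed
```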

register_forward_pre_hook(hook: Union[Callable[[T, Tuple[Any, ...]], Optional[Any]], Callable[[T, Tuple[Any, ...], Dict[str, Any]], Optional[Tuple[Any, Dict[str, Any]]]]], *, prepend: bool = False, with_kwargs: bool = False) RemovableHandle

Register a forward pre-hook on the module.

The hook will be called every time before forward() is invoked.

If with_kwargs is false or not specified, the input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks and only to the forward. The hook can modify the input. User can either return a tuple or a single modified value in the hook. We will wrap the value into a tuple if a single value is returned (unless that value is already a tuple). The hook should have the following signature:

hook(module, args) -> None or modified input

If with_kwargs is true, the forward pre-hook will be passed the kwargs given to the forward function. And if the hook modifies the input, both the args and kwargs should be returned. The hook should have the following signature:

hook(module, args, kwargs) -> None or a tuple of modified input and kwargs

Parameters
  • hook (Callable) – The user defined hook to be registered.

  • prepend (bool) – If true, the provided hook will be fired before all existing forward_pre hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing forward_pre hooks on this torch.nn.modules.Module. Note that global forward_pre hooks registered with register_module_forward_pre_hook() will fire before all hooks registered by this method. Default: False

  • with_kwargs (bool) – If true, the hook will be passed the kwargs given to the forward function. Default: False

Returns

a handle that can be used to remove the added hook by calling handle.remove()

Return type

torch.utils.hooks.RemovableHandle

register_full_backward_hook(hook: Callable[[Module, Union[Tuple[Tensor, ...], Tensor], Union[Tuple[Tensor, ...], Tensor]], Union[None, Tuple[Tensor, ...], Tensor]], prepend: bool = False) RemovableHandle

Register a backward hook on the module.

The hook will be called every time the gradients with respect to a module are computed, i.e. the hook will execute if and only if the gradients with respect to module outputs are computed. The hook should have the following signature:

hook(module, grad_input, grad_output) -> tuple(Tensor) or None

The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments.

For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module’s forward function.

Warning

Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error.

Parameters
  • hook (Callable) – The user-defined hook to be registered.

  • prepend (bool) – If true, the provided hook will be fired before all existing backward hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing backward hooks on this torch.nn.modules.Module. Note that global backward hooks registered with register_module_full_backward_hook() will fire before all hooks registered by this method.

Returns

a handle that can be used to remove the added hook by calling handle.remove()

Return type

torch.utils.hooks.RemovableHandle

register_full_backward_pre_hook(hook: Callable[[Module, Union[Tuple[Tensor, ...], Tensor]], Union[None, Tuple[Tensor, ...], Tensor]], prepend: bool = False) RemovableHandle

Register a backward pre-hook on the module.

The hook will be called every time the gradients for the module are computed. The hook should have the following signature:

hook(module, grad_output) -> tuple[Tensor] or None

The grad_output is a tuple. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the output that will be used in place of grad_output in subsequent computations. Entries in grad_output will be None for all non-Tensor arguments.

For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module’s forward function.

Warning

Modifying inputs inplace is not allowed when using backward hooks and will raise an error.

Parameters
  • hook (Callable) – The user-defined hook to be registered.

  • prepend (bool) – If true, the provided hook will be fired before all existing backward_pre hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing backward_pre hooks on this torch.nn.modules.Module. Note that global backward_pre hooks registered with register_module_full_backward_pre_hook() will fire before all hooks registered by this method.

Returns

a handle that can be used to remove the added hook by calling handle.remove()

Return type

torch.utils.hooks.RemovableHandle

register_load_state_dict_post_hook(hook)

Register a post hook to be run after module’s load_state_dict is called.

It should have the following signature:

hook(module, incompatible_keys) -> None

The module argument is the current module that this hook is registered on, and the incompatible_keys argument is a NamedTuple consisting of attributes missing_keys and unexpected_keys. missing_keys is a list of str containing the missing keys and unexpected_keys is a list of str containing the unexpected keys.

The given incompatible_keys can be modified inplace if needed.

Note that the checks performed when calling load_state_dict() with strict=True are affected by modifications the hook makes to missing_keys or unexpected_keys, as expected. Additions to either set of keys will result in an error being thrown when strict=True, and clearing out both missing and unexpected keys will avoid an error.

Returns

a handle that can be used to remove the added hook by calling handle.remove()

Return type

torch.utils.hooks.RemovableHandle

register_module(name: str, module: Optional[Module]) None

Alias for add_module().

register_parameter(name: str, param: Optional[Parameter]) None

Add a parameter to the module.

The parameter can be accessed as an attribute using given name.

Parameters
  • name (str) – name of the parameter. The parameter can be accessed from this module using the given name

  • param (Parameter or None) – parameter to be added to the module. If None, then operations that run on parameters, such as cuda, are ignored. If None, the parameter is not included in the module’s state_dict.

register_state_dict_pre_hook(hook)

Register a pre-hook for the state_dict() method.

These hooks will be called with arguments: self, prefix, and keep_vars before calling state_dict on self. The registered hooks can be used to perform pre-processing before the state_dict call is made.

requires_grad_(requires_grad: bool = True) T

Change if autograd should record operations on parameters in this module.

This method sets the parameters’ requires_grad attributes in-place.

This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training).

See the PyTorch documentation on locally disabling gradient computation for a comparison between .requires_grad_() and several similar mechanisms that may be confused with it.

Parameters

requires_grad (bool) – whether autograd should record operations on parameters in this module. Default: True.

Returns

self

Return type

Module

reset_adapter()

Resets weights of a LoRA module merged using model.merge_adapter(name).

reset_memory_hooks_state()

Reset the mem_rss_diff attribute of each module (see [~modeling_utils.ModuleUtilsMixin.add_memory_hooks]).

resize_token_embeddings(new_num_tokens: Optional[int] = None, pad_to_multiple_of: Optional[int] = None) Embedding

Resizes input token embeddings matrix of the model if new_num_tokens != config.vocab_size.

Takes care of tying weights embeddings afterwards if the model class has a tie_weights() method.

Parameters
  • new_num_tokens (int, optional) – The new number of tokens in the embedding matrix. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end. If not provided or None, just returns a pointer to the input tokens torch.nn.Embedding module of the model without doing anything.

  • pad_to_multiple_of (int, optional) –

    If set, will pad the embedding matrix to a multiple of the provided value. If new_num_tokens is set to None, will just pad the embedding to a multiple of pad_to_multiple_of.

    This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc

Returns

Pointer to the input tokens Embeddings Module of the model.

Return type

torch.nn.Embedding
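
A usage sketch; the added token is hypothetical:

```python
from transformers import PLBartTokenizer
from adapters import PLBartAdapterModel

tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-base")
model = PLBartAdapterModel.from_pretrained("uclanlp/plbart-base")

tokenizer.add_tokens(["<custom_token>"])  # hypothetical extra token
embeddings = model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=8)
print(embeddings.num_embeddings)  # vocabulary size, padded up to a multiple of 8
```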

reverse_bettertransformer()

Reverts the transformation from [~PreTrainedModel.to_bettertransformer] so that the original modeling is used, for example in order to save the model.

Returns

The model converted back to the original modeling.

Return type

[PreTrainedModel]

save_adapter(save_directory: str, adapter_name: str, with_head: bool = True, meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, use_safetensors: bool = False)

Saves an adapter and its configuration file to a directory so that it can be shared or reloaded using load_adapter().

Parameters
  • save_directory (str) – Path to a directory where the adapter should be saved.

  • adapter_name (str) – Name of the adapter to be saved.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

Raises

ValueError – If the given adapter name is invalid.
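
A usage sketch; the adapter name and target directory are hypothetical:

```python
from adapters import PLBartAdapterModel

model = PLBartAdapterModel.from_pretrained("uclanlp/plbart-base")
model.add_adapter("code_sum")  # hypothetical adapter
model.save_adapter("./adapters/code_sum", "code_sum", use_safetensors=True)
# Later, it can be restored with model.load_adapter("./adapters/code_sum").
```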

save_adapter_fusion(save_directory: str, adapter_names: Union[Fuse, list, str], meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, with_head: Union[bool, str] = False, use_safetensors: bool = False)

Saves an AdapterFusion layer and its configuration file to a directory so that it can be shared or reloaded using load_adapter_fusion().

Parameters
  • save_directory (str) – Path to a directory where the AdapterFusion should be saved.

  • adapter_names (Union[Fuse, list, str]) – AdapterFusion to be saved.

  • with_head (Union[bool, str]) – If True, will save a head with the same name as the AdapterFusionLayer. If a string, this will be used as the name of the head to be saved.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

Raises

ValueError – If the given AdapterFusion name is invalid.

save_all_adapter_fusions(save_directory: str, meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, use_safetensors: bool = False)

Saves all AdapterFusion layers of this model together with their configuration to subfolders of the given location.

Parameters
  • save_directory (str) – Path to a directory where the AdapterFusion layers should be saved.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

save_all_adapters(save_directory: str, with_head: bool = True, meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, use_safetensors: bool = False)

Saves all adapters of this model together with their configuration to subfolders of the given location.

Parameters
  • save_directory (str) – Path to a directory where the adapters should be saved.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

save_all_heads(save_directory: str, use_safetensors: bool = False)

Saves all prediction heads of this model to subfolders of the given location.

Parameters
  • save_directory (str) – Path to the base directory where prediction heads should be saved.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

save_head(save_directory: str, head_name: Optional[str] = None, use_safetensors: bool = False) None

Saves a model prediction head to a directory such that it can be reloaded using load_head().

Parameters
  • save_directory (str) – Path to the directory where the prediction head should be saved.

  • head_name (str, optional) – Name of the head to save. Set to None if model only has one head. Defaults to None.

  • use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.

save_pretrained(save_directory: Union[str, PathLike], **kwargs)

Save a model and its configuration file to a directory, so that it can be re-loaded using the [~PreTrainedModel.from_pretrained] class method.

Parameters
  • save_directory (str or os.PathLike) – Directory to which to save. Will be created if it doesn’t exist.

  • is_main_process (bool, optional, defaults to True) – Whether the process calling this is the main process or not. Useful during distributed training (e.g. on TPUs) when this function needs to be called on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.

  • state_dict (nested dictionary of torch.Tensor) – The state dictionary of the model to save. Will default to self.state_dict(), but can be used to only save parts of the model or if special precautions need to be taken when recovering the state dictionary of a model (like when using model parallelism).

  • save_function (Callable) – The function to use to save the state dictionary. Useful during distributed training (e.g. on TPUs) when one needs to replace torch.save with another method.

  • push_to_hub (bool, optional, defaults to False) – Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).

  • max_shard_size (int or str, optional, defaults to “5GB”) –

    The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be smaller than this size. If expressed as a string, needs to be digits followed by a unit (like “5MB”). We default it to “5GB” so that models can easily be loaded on free-tier Google Colab instances without CPU OOM issues.

    Warning: If a single weight of the model is bigger than max_shard_size, it will be in its own checkpoint shard which will be bigger than max_shard_size.

  • safe_serialization (bool, optional, defaults to True) – Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle).

  • variant (str, optional) – If specified, weights are saved in the format pytorch_model.<variant>.bin.

  • token (str or bool, optional) – The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).

  • save_peft_format (bool, optional, defaults to True) – For backward compatibility with the PEFT library, in case adapter weights are attached to the model, all keys of the adapter state dict need to be prepended with base_model.model. Advanced users can disable this behaviour by setting save_peft_format to False.

  • kwargs (Dict[str, Any], optional) – Additional keyword arguments passed along to the [~utils.PushToHubMixin.push_to_hub] method.
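
A round-trip sketch with a hypothetical local directory:

```python
from adapters import PLBartAdapterModel

model = PLBartAdapterModel.from_pretrained("uclanlp/plbart-base")
model.save_pretrained("./plbart-checkpoint", safe_serialization=True)  # hypothetical directory

# Reload later from the same directory.
reloaded = PLBartAdapterModel.from_pretrained("./plbart-checkpoint")
```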

set_active_adapters(adapter_setup: Union[list, AdapterCompositionBlock], skip_layers: Optional[List[int]] = None)

Sets the adapter modules to be used by default in every forward pass. This setting can be overridden by passing the adapter_names parameter in the forward() pass. If no adapter with the given name is found, no module of the respective type will be activated. In case the calling model class supports named prediction heads, this method will attempt to activate a prediction head with the name of the last adapter in the list of passed adapter names.

Parameters

adapter_setup (list) – The list of adapters to be activated by default. Can be a fusion or stacking configuration.
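
A hedged sketch with hypothetical adapter names, including a composed setup built with the adapters.composition helpers:

```python
import adapters.composition as ac
from adapters import PLBartAdapterModel

model = PLBartAdapterModel.from_pretrained("uclanlp/plbart-base")
model.add_adapter("a")
model.add_adapter("b")

model.set_active_adapters("a")                 # activate a single adapter
model.set_active_adapters(ac.Stack("a", "b"))  # activate a stacked composition
```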

set_adapter(adapter_name: Union[List[str], str]) None

If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT official documentation: https://huggingface.co/docs/peft

Sets a specific adapter by forcing the model to use that adapter and disabling the other adapters.

Parameters

adapter_name (Union[List[str], str]) – The name of the adapter to set. Can be also a list of strings to set multiple adapters.

set_extra_state(state: Any) None

Set extra state contained in the loaded state_dict.

This function is called from load_state_dict() to handle any extra state found within the state_dict. Implement this function and a corresponding get_extra_state() for your module if you need to store extra state within its state_dict.

Parameters

state (dict) – Extra state from the state_dict

set_input_embeddings(value: Module)

Set model’s input embeddings.

Parameters

value (nn.Module) – A module mapping vocabulary to hidden states.

share_memory() T

See torch.Tensor.share_memory_().

state_dict(*args, destination=None, prefix='', keep_vars=False)

Return a dictionary containing references to the whole state of the module.

Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to None are not included.

Note

The returned object is a shallow copy. It contains references to the module’s parameters and buffers.

Warning

Currently state_dict() also accepts positional arguments for destination, prefix and keep_vars in order. However, this is being deprecated and keyword arguments will be enforced in future releases.

Warning

Please avoid the use of argument destination as it is not designed for end-users.

Parameters
  • destination (dict, optional) – If provided, the state of module will be updated into the dict and the same object is returned. Otherwise, an OrderedDict will be created and returned. Default: None.

  • prefix (str, optional) – a prefix added to parameter and buffer names to compose the keys in state_dict. Default: ''.

  • keep_vars (bool, optional) – by default the Tensor s returned in the state dict are detached from autograd. If it’s set to True, detaching will not be performed. Default: False.

Returns

a dictionary containing a whole state of the module

Return type

dict

Example:

>>> # xdoctest: +SKIP("undefined vars")
>>> module.state_dict().keys()
['bias', 'weight']

tie_weights()

Tie the weights between the input embeddings and the output embeddings.

If the torchscript flag is set in the configuration, TorchScript can’t handle parameter sharing, so we clone the weights instead.

to(*args, **kwargs)

Move and/or cast the parameters and buffers.

This can be called as

to(device=None, dtype=None, non_blocking=False)
to(dtype, non_blocking=False)
to(tensor, non_blocking=False)
to(memory_format=torch.channels_last)

Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.

See below for examples.

Note

This method modifies the module in-place.

Parameters
  • device (torch.device) – the desired device of the parameters and buffers in this module

  • dtype (torch.dtype) – the desired floating point or complex dtype of the parameters and buffers in this module

  • tensor (torch.Tensor) – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module

  • memory_format (torch.memory_format) – the desired memory format for 4D parameters and buffers in this module (keyword only argument)

Returns

self

Return type

Module

Examples:

>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA1)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)

>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)

to_bettertransformer() PreTrainedModel

Converts the model to use [PyTorch’s native attention implementation](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html), integrated into Transformers through the [Optimum library](https://huggingface.co/docs/optimum/bettertransformer/overview). Only a subset of all Transformers models are supported.

PyTorch’s attention fastpath allows speeding up inference through kernel fusions and the use of [nested tensors](https://pytorch.org/docs/stable/nested.html). Detailed benchmarks can be found in [this blog post](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2).

Returns

The model converted to BetterTransformer.

Return type

[PreTrainedModel]

to_empty(*, device: Optional[Union[int, str, device]], recurse: bool = True) T

Move the parameters and buffers to the specified device without copying storage.

Parameters
  • device (torch.device) – The desired device of the parameters and buffers in this module.

  • recurse (bool) – Whether parameters and buffers of submodules should be recursively moved to the specified device.

Returns

self

Return type

Module

train(mode: bool = True) T

Set the module in training mode.

This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

Parameters

mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.

Returns

self

Return type

Module

train_adapter(adapter_setup: Union[list, AdapterCompositionBlock], train_embeddings=False)

Sets the model into a mode for training the given adapters. If self.base_model is self, the model class must inherit from a class that implements this method, to preclude infinite recursion.
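
A short sketch with a hypothetical adapter name; only the adapter weights remain trainable afterwards:

```python
from adapters import PLBartAdapterModel

model = PLBartAdapterModel.from_pretrained("uclanlp/plbart-base")
model.add_adapter("code_sum")    # hypothetical task adapter
model.train_adapter("code_sum")  # freezes the base model weights and unfreezes the adapter

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")
```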

train_adapter_fusion(adapter_setup: Union[list, AdapterCompositionBlock], unfreeze_adapters=False)

Sets the model into a mode for training an adapter fusion layer determined by a list of adapter names. If self.base_model is self, the model class must inherit from a class that implements this method, to preclude infinite recursion.
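
A hedged sketch with hypothetical adapter names: the single adapters stay frozen and only the fusion layer is trained:

```python
import adapters.composition as ac
from adapters import PLBartAdapterModel

model = PLBartAdapterModel.from_pretrained("uclanlp/plbart-base")
model.add_adapter("a")
model.add_adapter("b")
model.add_adapter_fusion(ac.Fuse("a", "b"))

model.train_adapter_fusion(ac.Fuse("a", "b"))  # train only the fusion layer
```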

type(dst_type: Union[dtype, str]) T

Casts all parameters and buffers to dst_type.

Note

This method modifies the module in-place.

Parameters

dst_type (type or string) – the desired type

Returns

self

Return type

Module

warn_if_padding_and_no_attention_mask(input_ids, attention_mask)

Shows a one-time warning if the input_ids appear to contain padding and no attention mask was given.

xpu(device: Optional[Union[int, device]] = None) T

Move all model parameters and buffers to the XPU.

This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on XPU while being optimized.

Note

This method modifies the module in-place.

Parameters

device (int, optional) – if specified, all parameters will be copied to that device

Returns

self

Return type

Module

zero_grad(set_to_none: bool = True) None

Reset gradients of all model parameters.

See similar function under torch.optim.Optimizer for more context.

Parameters

set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details.