Encoder Decoder Models¶
Note
- Adapter implementation notes:
Unlike other models, an explicit EncoderDecoderAdapterModel for the EncoderDecoderModel has not been implemented. This is because Hugging Face Transformers' AutoModel class does not support the EncoderDecoderModel; as a result, our AutoAdapterModel class cannot support an EncoderDecoderAdapterModel either. To use an EncoderDecoderModel with Adapters, follow these steps:
First, create an EncoderDecoderModel instance, for example using model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased").
Next, convert this model to an adapter model using the adapters.init(model) function.
Adapters can be added to both the encoder and the decoder. As usual, the leave_out parameter can be used to specify the layers in which no adapters should be added. For the EncoderDecoderModel, the layer IDs are counted separately over the encoder and the decoder, each starting from 0. Thus, specifying leave_out=[0,1] will leave out the first and second layer of the encoder and the first and second layer of the decoder (see the sketch below).
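The steps above can be combined into a short sketch. The adapter name and the SeqBnConfig bottleneck configuration below are illustrative choices, not requirements:

```python
from transformers import EncoderDecoderModel
import adapters
from adapters import SeqBnConfig

# 1. Create a plain EncoderDecoderModel from two pretrained checkpoints
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# 2. Add adapter support to the existing model instance
adapters.init(model)

# Add a bottleneck adapter to encoder and decoder; leave_out=[0, 1] skips
# layers 0 and 1 of the encoder and layers 0 and 1 of the decoder
model.add_adapter("seq2seq_adapter", config=SeqBnConfig(leave_out=[0, 1]))

# Freeze the base model weights and activate the adapter for training
model.train_adapter("seq2seq_adapter")
```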
The EncoderDecoderModel can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the encoder and any pretrained autoregressive model as the decoder.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan, and Aliaksei Severyn.
After such an EncoderDecoderModel has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).
One application of this architecture is to leverage two pretrained BertModel instances as the encoder and decoder of a summarization model, as was shown in Text Summarization with Pretrained Encoders by Yang Liu and Mirella Lapata.
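As a quick illustration of the summarization use case, the sketch below loads a fine-tuned BERT2BERT checkpoint and generates a summary. The checkpoint name is an assumption; any fine-tuned EncoderDecoderModel checkpoint can be substituted:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Assumed checkpoint: a BERT2BERT model fine-tuned for CNN/DailyMail summarization
tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")

article = "The tower is 324 metres tall, about the same height as an 81-storey building."
input_ids = tokenizer(article, return_tensors="pt").input_ids

# Generation parameters are left at their defaults here
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```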
EncoderDecoderModel¶
- class transformers.EncoderDecoderModel(config: Optional[PretrainedConfig] = None, encoder: Optional[PreTrainedModel] = None, decoder: Optional[PreTrainedModel] = None)¶
This class can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the encoder and any pretrained autoregressive model as the decoder. The encoder is loaded via the [~AutoModel.from_pretrained] function and the decoder is loaded via the [~AutoModelForCausalLM.from_pretrained] function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like summarization.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, and Aliaksei Severyn.
After such an Encoder Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other models (see the examples for more information).
This model inherits from [PreTrainedModel]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
- Parameters
config ([EncoderDecoderConfig]) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [~PreTrainedModel.from_pretrained] method to load the model weights.
[EncoderDecoderModel] is a generic model class that will be instantiated as a transformer architecture with one of the base model classes of the library as encoder and another one as decoder when created with the [~transformers.AutoModel.from_pretrained] class method for the encoder and the [~transformers.AutoModelForCausalLM.from_pretrained] class method for the decoder.
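Alternatively to the from_pretrained class methods, an EncoderDecoderModel can be built from already-instantiated encoder and decoder modules, matching the constructor signature above. The sketch below is illustrative; note that the decoder config must have is_decoder=True and add_cross_attention=True so that cross-attention layers are added:

```python
from transformers import BertModel, BertLMHeadModel, EncoderDecoderModel

# Encoder: a pretrained autoencoding model
encoder = BertModel.from_pretrained("bert-base-uncased")

# Decoder: a pretrained autoregressive model with cross-attention enabled
decoder = BertLMHeadModel.from_pretrained(
    "bert-base-uncased", is_decoder=True, add_cross_attention=True
)

# The joint EncoderDecoderConfig is derived from the two sub-configs
model = EncoderDecoderModel(encoder=encoder, decoder=decoder)
```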
- forward(input_ids: Optional[LongTensor] = None, attention_mask: Optional[FloatTensor] = None, decoder_input_ids: Optional[LongTensor] = None, decoder_attention_mask: Optional[BoolTensor] = None, encoder_outputs: Optional[Tuple[FloatTensor]] = None, past_key_values: Optional[Tuple[Tuple[FloatTensor]]] = None, inputs_embeds: Optional[FloatTensor] = None, decoder_inputs_embeds: Optional[FloatTensor] = None, labels: Optional[LongTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, **kwargs) → Union[Tuple, Seq2SeqLMOutput]¶
The [EncoderDecoderModel] forward method, overrides the __call__ special method.
Tip
Although the recipe for the forward pass needs to be defined within this function, one should call the [Module] instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
- Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using [PreTrainedTokenizer]. See [PreTrainedTokenizer.encode] and [PreTrainedTokenizer.__call__] for details.
[What are input IDs?](../glossary#input-ids)
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) –
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
[What are attention masks?](../glossary#attention-mask)
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) –
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using [PreTrainedTokenizer]. See [PreTrainedTokenizer.encode] and [PreTrainedTokenizer.__call__] for details.
[What are input IDs?](../glossary#input-ids)
If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For training, decoder_input_ids are automatically created by the model by shifting the labels to the right, replacing -100 by the pad_token_id and prepending them with the decoder_start_token_id.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) – Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
encoder_outputs (tuple(torch.FloatTensor), optional) – This tuple must consist of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) is a tensor of hidden-states at the output of the last layer of the encoder, used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) –
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) – Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) – Labels for computing the masked language modeling loss for the decoder. Indices should be in [-100, 0, …, config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for tokens with labels in [0, …, config.vocab_size].
use_cache (bool, optional) – If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) – If set to True, the model will return a [~utils.Seq2SeqLMOutput] instead of a plain tuple.
kwargs (optional) –
Remaining dictionary of keyword arguments. Keyword arguments come in two flavors:
Without a prefix which will be input as **encoder_kwargs for the encoder forward function.
With a decoder_ prefix which will be input as **decoder_kwargs for the decoder forward function.
Returns –
[transformers.modeling_outputs.Seq2SeqLMOutput] or tuple(torch.FloatTensor): A [transformers.modeling_outputs.Seq2SeqLMOutput] or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration ([EncoderDecoderConfig]) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) – Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
Examples –
```python
>>> from transformers import EncoderDecoderModel, BertTokenizer
>>> import torch

>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = EncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google-bert/bert-base-uncased", "google-bert/bert-base-uncased"
... )  # initialize Bert2Bert from pre-trained checkpoints

>>> # training
>>> model.config.decoder_start_token_id = tokenizer.cls_token_id
>>> model.config.pad_token_id = tokenizer.pad_token_id
>>> model.config.vocab_size = model.config.decoder.vocab_size

>>> input_ids = tokenizer("This is a really long text", return_tensors="pt").input_ids
>>> labels = tokenizer("This is the corresponding summary", return_tensors="pt").input_ids
>>> outputs = model(input_ids=input_ids, labels=labels)
>>> loss, logits = outputs.loss, outputs.logits

>>> # save and load from pretrained
>>> model.save_pretrained("bert2bert")
>>> model = EncoderDecoderModel.from_pretrained("bert2bert")

>>> # generation
>>> generated = model.generate(input_ids)
```
- classmethod from_encoder_decoder_pretrained(encoder_pretrained_model_name_or_path: Optional[str] = None, decoder_pretrained_model_name_or_path: Optional[str] = None, *model_args, **kwargs) → PreTrainedModel¶
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.
The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). To train the model, you need to first set it back in training mode with model.train().
- Params:
- encoder_pretrained_model_name_or_path (str, optional):
Information necessary to initiate the encoder. Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
A path to a directory containing model weights saved using [~PreTrainedModel.save_pretrained], e.g., ./my_model_directory/.
A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- decoder_pretrained_model_name_or_path (str, optional, defaults to None):
Information necessary to initiate the decoder. Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
A path to a directory containing model weights saved using [~PreTrainedModel.save_pretrained], e.g., ./my_model_directory/.
A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (remaining positional arguments, optional):
All remaining positional arguments will be passed to the underlying model’s __init__ method.
- kwargs (remaining dictionary of keyword arguments, optional):
Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True).
To update the encoder configuration, use the prefix encoder_ for each configuration parameter.
To update the decoder configuration, use the prefix decoder_ for each configuration parameter.
To update the parent model configuration, do not use a prefix for each configuration parameter (a sketch of these prefixes follows the example below).
Behaves differently depending on whether a config is provided or automatically loaded.
Example:
```python
>>> from transformers import EncoderDecoderModel

>>> # initialize a bert2bert from two pretrained BERT models. Note that the cross-attention layers will be randomly initialized
>>> model = EncoderDecoderModel.from_encoder_decoder_pretrained("google-bert/bert-base-uncased", "google-bert/bert-base-uncased")
>>> # saving model after fine-tuning
>>> model.save_pretrained("./bert2bert")
>>> # load fine-tuned model
>>> model = EncoderDecoderModel.from_pretrained("./bert2bert")
```
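To illustrate the encoder_/decoder_ prefix convention described above, the following sketch overrides one dropout value in each sub-model; the particular parameters and values are arbitrary examples:

```python
from transformers import EncoderDecoderModel

# encoder_*/decoder_* keyword arguments are routed to the respective sub-config
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "google-bert/bert-base-uncased",
    "google-bert/bert-base-uncased",
    encoder_hidden_dropout_prob=0.2,
    decoder_attention_probs_dropout_prob=0.2,
)

print(model.config.encoder.hidden_dropout_prob)  # 0.2, from the encoder_ override
print(model.config.decoder.attention_probs_dropout_prob)  # 0.2, from the decoder_ override
```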