RoBERTa

The RoBERTa model was proposed in RoBERTa: A Robustly Optimized BERT Pretraining Approach by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer and Veselin Stoyanov. It is based on Google’s BERT model released in 2018.

Note

This class is nearly identical to the PyTorch implementation of RoBERTa in Huggingface Transformers. For more information, visit the corresponding section in their documentation.

RobertaConfig

class transformers.RobertaConfig(pad_token_id=1, bos_token_id=0, eos_token_id=2, **kwargs)

This is the configuration class to store the configuration of a RobertaModel or a TFRobertaModel. It is used to instantiate a RoBERTa model according to the specified arguments, defining the model architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

The RobertaConfig class directly inherits from BertConfig and reuses the same defaults. Please check the parent class for more information.

Examples:

>>> from transformers import RobertaConfig, RobertaModel

>>> # Initializing a RoBERTa configuration
>>> configuration = RobertaConfig()

>>> # Initializing a model from the configuration
>>> model = RobertaModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

RobertaTokenizer

class transformers.RobertaTokenizer(vocab_file, merges_file, errors='replace', bos_token='<s>', eos_token='</s>', sep_token='</s>', cls_token='<s>', unk_token='<unk>', pad_token='<pad>', mask_token='<mask>', add_prefix_space=False, **kwargs)

Constructs a RoBERTa tokenizer, derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding.

This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will be encoded differently depending on whether it is at the beginning of the sentence (without a preceding space) or not:

>>> from transformers import RobertaTokenizer
>>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
>>> tokenizer("Hello world")['input_ids']
[0, 31414, 232, 2]
>>> tokenizer(" Hello world")['input_ids']
[0, 20920, 232, 2]

You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when calling it on some text (see the sketch after the note below), but since the model was not pretrained this way, it might degrade performance.

Note

When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one).
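The snippet below is a minimal sketch of the two ways to pass add_prefix_space mentioned above, together with is_split_into_words; the resulting token ids are omitted since they depend on the loaded vocabulary:

>>> from transformers import RobertaTokenizer

>>> # Option 1: pass add_prefix_space for a single call
>>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
>>> ids = tokenizer("Hello world", add_prefix_space=True)["input_ids"]

>>> # Option 2: set add_prefix_space when instantiating the tokenizer
>>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base", add_prefix_space=True)

>>> # With pre-tokenized input, a space is added before every word (including the first)
>>> ids = tokenizer(["Hello", "world"], is_split_into_words=True)["input_ids"]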

This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

Parameters
  • vocab_file (str) – Path to the vocabulary file.

  • merges_file (str) – Path to the merges file.

  • errors (str, optional, defaults to "replace") – Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information.

  • bos_token (str, optional, defaults to "<s>") –

    The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.

    Note

    When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.

  • eos_token (str, optional, defaults to "</s>") –

    The end of sequence token.

    Note

    When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.

  • sep_token (str, optional, defaults to "</s>") – The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.

  • cls_token (str, optional, defaults to "<s>") – The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.

  • unk_token (str, optional, defaults to "<unk>") – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

  • pad_token (str, optional, defaults to "<pad>") – The token used for padding, for example when batching sequences of different lengths.

  • mask_token (str, optional, defaults to "<mask>") – The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.

  • add_prefix_space (bool, optional, defaults to False) – Whether or not to add an initial space to the input. This allows treating the leading word like any other word (the RoBERTa tokenizer detects the beginning of a word by the preceding space).

build_inputs_with_special_tokens(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int]

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A RoBERTa sequence has the following format:

  • single sequence: <s> X </s>

  • pair of sequences: <s> A </s></s> B </s>

Parameters
  • token_ids_0 (List[int]) – List of IDs to which the special tokens will be added.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.

Returns

List of input IDs with the appropriate special tokens.

Return type

List[int]
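As a brief sketch of the formats listed above, the ids below are produced with add_special_tokens=False so that only this method adds the special tokens; the exact id values depend on the vocabulary:

>>> from transformers import RobertaTokenizer
>>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

>>> ids_a = tokenizer("Hello world", add_special_tokens=False)["input_ids"]
>>> ids_b = tokenizer("How are you?", add_special_tokens=False)["input_ids"]

>>> single = tokenizer.build_inputs_with_special_tokens(ids_a)        # <s> A </s>
>>> pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)   # <s> A </s></s> B </s>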

create_token_type_ids_from_sequences(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int]

Create a mask from the two sequences passed to be used in a sequence-pair classification task. RoBERTa does not make use of token type ids, therefore a list of zeros is returned.

Parameters
  • token_ids_0 (List[int]) – List of IDs.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.

Returns

List of zeros.

Return type

List[int]

get_special_tokens_mask(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False) → List[int]

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method.

Parameters
  • token_ids_0 (List[int]) – List of IDs.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.

  • already_has_special_tokens (bool, optional, defaults to False) – Whether or not the token list is already formatted with special tokens for the model.

Returns

A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Return type

List[int]
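A short sketch of how this method can be used on an already-formatted sequence; the returned mask marks the positions of <s> and </s> with 1 and all other tokens with 0:

>>> from transformers import RobertaTokenizer
>>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

>>> ids = tokenizer("Hello world")["input_ids"]  # already contains <s> and </s>
>>> mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)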

save_vocabulary(save_directory: str, filename_prefix: Optional[str] = None) → Tuple[str]

Save only the vocabulary of the tokenizer (vocabulary + added tokens).

This method won’t save the configuration and special token mappings of the tokenizer. Use _save_pretrained() to save the whole state of the tokenizer.

Parameters
  • save_directory (str) – The directory in which to save the vocabulary.

  • filename_prefix (str, optional) – An optional prefix to add to the names of the saved files.

Returns

Paths to the files saved.

Return type

Tuple[str]

RobertaModel

class transformers.RobertaModel(config, add_pooling_layer=True)

The bare RoBERTa Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in Attention is all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.

To behave as a decoder, the model needs to be initialized with the is_decoder argument of the configuration set to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder and add_cross_attention arguments set to True; encoder_hidden_states is then expected as an input to the forward pass.
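A minimal sketch of that configuration, assuming the pre-trained roberta-base checkpoint; note that the cross-attention weights are newly initialized in this case, since the checkpoint does not contain them:

>>> from transformers import RobertaConfig, RobertaModel

>>> config = RobertaConfig.from_pretrained("roberta-base", is_decoder=True, add_cross_attention=True)
>>> model = RobertaModel.from_pretrained("roberta-base", config=config)
>>> # encoder_hidden_states can now be passed to model(...) for cross-attention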

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, **kwargs)

The RobertaModel forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

    What are attention masks?

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,

    • 1 corresponds to a sentence B token.

    What are token type IDs?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) –

    Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,

    • 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

  • encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.

  • encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) –

    Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

  • past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) –

    Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.

    If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).

  • use_cache (bool, optional) – If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).

Returns

A BaseModelOutputWithPoolingAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.

  • pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) – Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

  • cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.

  • past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) – Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).

    Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

Return type

BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)

Example:

>>> from transformers import RobertaTokenizer, RobertaModel
>>> import torch

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = RobertaModel.from_pretrained('roberta-base')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state

get_input_embeddings()

Returns the model’s input embeddings.

Returns

A torch module mapping vocabulary to hidden states.

Return type

nn.Module

set_input_embeddings(value)

Set model’s input embeddings.

Parameters

value (nn.Module) – A module mapping vocabulary to hidden states.
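A small sketch of the two methods above; the replacement embedding is a plain nn.Embedding built from the model configuration and is only illustrative (it does not, for example, set a padding index):

>>> import torch
>>> from transformers import RobertaModel

>>> model = RobertaModel.from_pretrained("roberta-base")
>>> embeddings = model.get_input_embeddings()  # an nn.Embedding of shape (vocab_size, hidden_size)

>>> new_embeddings = torch.nn.Embedding(model.config.vocab_size, model.config.hidden_size)
>>> model.set_input_embeddings(new_embeddings)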

RobertaModelWithHeads

class transformers.RobertaModelWithHeads(config)

Roberta Model transformer with the option to add multiple flexible heads on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

property active_head

The active prediction head configuration of this model. Can be either the name of a single available head (string) or a list of multiple available heads. In case of a list of heads, the same base model is forwarded through all specified heads.

Returns

A string or a list of strings describing the active head configuration.

Return type

Union[str, List[str]]

add_adapter(adapter_name: str, config=None, overwrite_ok: bool = False, set_active: bool = False)

Adds a new adapter module of the specified type to the model.

Parameters
  • adapter_name (str) – The name of the adapter module to be added.

  • config (str or dict, optional) –

    The adapter configuration, can be either:

    • the string identifier of a pre-defined configuration dictionary

    • a configuration dictionary specifying the full config

    • if not given, the default configuration for this adapter type will be used

  • overwrite_ok (bool, optional) – Overwrite an adapter with the same name if it exists. By default (False), an exception is thrown.

  • set_active (bool, optional) – Set the adapter to be the active one. By default (False), the adapter is added but not activated.

If self.base_model is self, the model class must inherit from a class that implements this method to prevent infinite recursion.
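A brief sketch of adding and activating a bottleneck adapter; the adapter name is arbitrary, and "pfeiffer" is used here as an example of a pre-defined configuration string:

>>> from transformers import RobertaModelWithHeads

>>> model = RobertaModelWithHeads.from_pretrained("roberta-base")
>>> model.add_adapter("my_adapter", config="pfeiffer", set_active=True)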

add_adapter_fusion(adapter_names: Union[transformers.adapters.composition.Fuse, list], config=None, overwrite_ok: bool = False, set_active: bool = False)

Adds AdapterFusion to the model with all the necessary configurations and weight initializations.

Parameters
  • adapter_names – a list of adapter names which should be fused

  • adapter_fusion_config (str or dict) –

    adapter fusion configuration, can be either:

    • a string identifying a pre-defined adapter fusion configuration

    • a dictionary representing the adapter fusion configuration

    • the path to a file containing the adapter fusion configuration

  • override_kwargs – dictionary items for values which should be overwritten in the default AdapterFusion configuration

  • set_active (bool, optional) – Activate the added AdapterFusion. By default (False), the AdapterFusion is added but not activated.
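A sketch of fusing two adapters via the Fuse composition block referenced in the signature above; both adapters are assumed to have been added beforehand:

>>> from transformers import RobertaModelWithHeads
>>> from transformers.adapters.composition import Fuse

>>> model = RobertaModelWithHeads.from_pretrained("roberta-base")
>>> model.add_adapter("adapter_a")
>>> model.add_adapter("adapter_b")
>>> model.add_adapter_fusion(Fuse("adapter_a", "adapter_b"), set_active=True)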

add_causal_lm_head(head_name, activation_function='gelu', overwrite_ok=False)

Adds a causal language modeling head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • activation_function (str, optional) – Activation function. Defaults to ‘gelu’.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

add_classification_head(head_name, num_labels=2, layers=2, activation_function='tanh', overwrite_ok=False, multilabel=False, id2label=None, use_pooler=False)

Adds a sequence classification head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • num_labels (int, optional) – Number of classification labels. Defaults to 2.

  • layers (int, optional) – Number of layers. Defaults to 2.

  • activation_function (str, optional) – Activation function. Defaults to ‘tanh’.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

  • multilabel (bool, optional) – Enable multilabel classification setup. Defaults to False.
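An illustrative sketch of adding a classification head; the head name and label mapping are arbitrary, and assigning to the active_head property to activate the head is an assumption based on its description above:

>>> from transformers import RobertaModelWithHeads

>>> model = RobertaModelWithHeads.from_pretrained("roberta-base")
>>> model.add_classification_head("sentiment", num_labels=2, id2label={0: "negative", 1: "positive"})
>>> # assumes active_head is settable; alternatively, pass head="sentiment" to forward()
>>> model.active_head = "sentiment"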

add_dependency_parsing_head(head_name, num_labels=2, overwrite_ok=False, id2label=None)

Adds a biaffine dependency parsing head on top of the model. The parsing head uses the architecture described in “Is Supervised Syntactic Parsing Beneficial for Language Understanding? An Empirical Investigation” (Glavaš & Vulić, 2021) (https://arxiv.org/pdf/2008.06788.pdf).

Parameters
  • head_name (str) – The name of the head.

  • num_labels (int, optional) – Number of labels. Defaults to 2.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

  • id2label (dict, optional) – Mapping from label ids to labels. Defaults to None.

add_masked_lm_head(head_name, activation_function='gelu', overwrite_ok=False)

Adds a masked language modeling head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • activation_function (str, optional) – Activation function. Defaults to ‘gelu’.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

add_multiple_choice_head(head_name, num_choices=2, layers=2, activation_function='tanh', overwrite_ok=False, id2label=None, use_pooler=False)

Adds a multiple choice head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • num_choices (int, optional) – Number of choices. Defaults to 2.

  • layers (int, optional) – Number of layers. Defaults to 2.

  • activation_function (str, optional) – Activation function. Defaults to ‘tanh’.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

add_tagging_head(head_name, num_labels=2, layers=1, activation_function='tanh', overwrite_ok=False, id2label=None)

Adds a token classification head on top of the model.

Parameters
  • head_name (str) – The name of the head.

  • num_labels (int, optional) – Number of classification labels. Defaults to 2.

  • layers (int, optional) – Number of layers. Defaults to 1.

  • activation_function (str, optional) – Activation function. Defaults to ‘tanh’.

  • overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.

delete_adapter(adapter_name: str)

Deletes the adapter with the specified name from the model.

Parameters

adapter_name (str) – The name of the adapter.

delete_adapter_fusion(adapter_names: Union[transformers.adapters.composition.Fuse, list])

Deletes the AdapterFusion layer of the specified adapters.

Parameters

adapter_names (Union[Fuse, list]) – List of adapters for which to delete the AdapterFusion layer.

delete_head(head_name: str)

Deletes the prediction head with the specified name from the model.

Parameters

head_name (str) – The name of the prediction head to delete.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, adapter_names=None, head=None, **kwargs)

The RobertaModelWithHeads forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

    What are attention masks?

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,

    • 1 corresponds to a sentence B token.

    What are token type IDs?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) –

    Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,

    • 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

Returns

A ModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

Return type

ModelOutput or tuple(torch.FloatTensor)

ModelOutput is the base class for all model outputs as dataclasses. It has a __getitem__ that allows indexing by integer or slice (like a tuple) or by string (like a dictionary), ignoring the None attributes. Otherwise it behaves like a regular Python dictionary.

Warning

You can’t unpack a ModelOutput directly. Use the to_tuple() method to convert it to a tuple before.

Example:

>>> from transformers import RobertaTokenizer, RobertaModelWithHeads
>>> import torch

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = RobertaModelWithHeads.from_pretrained('roberta-base')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state

freeze_model(freeze=True)

Freezes all weights of the model.

get_adapter(name)

If self.base_model is self, the model class must inherit from a class that implements this method to prevent infinite recursion.

get_labels(head_name=None)

Returns the labels the given head is assigning/predicting.

Parameters
  • head_name (str, optional) – The name of the head whose labels should be returned. Default is None. If the name is None, the labels of the active head are returned.

Returns: labels

get_labels_dict(head_name=None)

Returns the id2label dict for the given head.

Parameters
  • head_name (str, optional) – The name of the head whose labels should be returned. Default is None. If the name is None, the labels of the active head are returned.

Returns: id2label

load_adapter(adapter_name_or_path: str, config: Union[dict, str] = None, version: str = None, model_name: str = None, load_as: str = None, source: str = 'ah', with_head: bool = True, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None, leave_out: Optional[List[int]] = None, id2label=None, set_active: bool = False, **kwargs) → str

Loads a pre-trained pytorch adapter module from the local file system or a remote location.

Parameters
  • adapter_name_or_path (str) –

    can be either:

    • the identifier of a pre-trained task adapter to be loaded from Adapter Hub

    • a path to a directory containing adapter weights saved using model.save_adapter()

    • a URL pointing to a zip folder containing a saved adapter module

  • config (dict or str, optional) – The requested configuration of the adapter. If not specified, it will be either the default adapter config for the requested adapter (if specified) or the global default adapter config.

  • version (str, optional) – The version of the adapter to be loaded.

  • model_name (str, optional) – The string identifier of the pre-trained model.

  • load_as (str, optional) – Load the adapter using this name. By default, the name with which the adapter was saved will be used.

  • source (str, optional) –

    Identifier of the source(s) from where to load the adapter. Can be:

    • "ah" (default): search on AdapterHub.

    • "hf": search on the HuggingFace Model Hub.

    • None: only search on the local file system.

  • leave_out – Dynamically drop adapter modules in the specified Transformer layers when loading the adapter.

  • set_active (bool, optional) – Set the loaded adapter to be the active one. By default (False), the adapter is loaded but not activated.

Returns

The name with which the adapter was added to the model.

Return type

str
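A sketch of loading an adapter from the Hub and activating it; the identifier "sentiment/sst-2@ukp" is only an example of the <task>/<subtask>@<org> identifier format and should be replaced with an adapter that is actually available:

>>> from transformers import RobertaModelWithHeads

>>> model = RobertaModelWithHeads.from_pretrained("roberta-base")
>>> adapter_name = model.load_adapter("sentiment/sst-2@ukp", source="ah", set_active=True)
>>> # the returned name can be reused later, e.g. with set_active_adapters(adapter_name)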

load_adapter_fusion(adapter_fusion_name_or_path: str, load_as: str = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None, set_active: bool = False, **kwargs) → str

Loads a pre-trained AdapterFusion layer from the local file system or a remote location.

Parameters
  • adapter_fusion_name_or_path (str) –

    can be either:

    • the identifier of a pre-trained task adapter fusion module to be loaded from Adapter Hub

    • a path to a directory containing AdapterFusion weights saved using model.save_adapter_fusion()

    • a URL pointing to a zip folder containing a saved adapter module

  • config (dict or str, optional) – The requested configuration of the adapter fusion. If not specified, it will be either the default config for the requested adapter fusion (if specified) or the global default adapter fusion config.

  • model_name (str, optional) – The string identifier of the pre-trained model.

  • load_as (str, optional) – Load the adapter using this name. By default, the name with which the adapter was saved will be used.

  • set_active (bool, optional) – Activate the loaded AdapterFusion. By default (False), the AdapterFusion is loaded but not activated.

Returns

The name with which the adapter was added to the model.

Return type

str

pre_transformer_forward(**kwargs)

This method should be called by every adapter-implementing model at the very beginning of the forward() method.

push_adapter_to_hub(repo_name: str, adapter_name: str, organization: Optional[str] = None, adapterhub_tag: Optional[str] = None, datasets_tag: Optional[str] = None, local_path: Optional[str] = None, commit_message: Optional[str] = None, private: Optional[bool] = None, use_auth_token: Union[bool, str] = True, overwrite_adapter_card: bool = False, adapter_card_kwargs: Optional[dict] = None)

Upload an adapter to HuggingFace’s Model Hub.

Parameters
  • repo_name (str) – The name of the repository on the model hub to upload to.

  • adapter_name (str) – The name of the adapter to be uploaded.

  • organization (str, optional) – Organization in which to push the adapter (you must be a member of this organization). Defaults to None.

  • adapterhub_tag (str, optional) – Tag of the format <task>/<subtask> for categorization on https://adapterhub.ml/explore/. See https://docs.adapterhub.ml/contributing.html#add-a-new-task-or-subtask for more. If not specified, datasets_tag must be given in case a new adapter card is generated. Defaults to None.

  • datasets_tag (str, optional) – Dataset identifier from https://huggingface.co/datasets. If not specified, adapterhub_tag must be given in case a new adapter card is generated. Defaults to None.

  • local_path (str, optional) – Local path used as clone directory of the adapter repository. If not specified, will create a temporary directory. Defaults to None.

  • commit_message (str, optional) – Message to commit while pushing. Will default to "add config", "add tokenizer" or "add model" depending on the type of the class.

  • private (bool, optional) – Whether or not the repository created should be private (requires a paying subscription).

  • use_auth_token (bool or str, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface). Defaults to True.

  • overwrite_adapter_card (bool, optional) – Overwrite an existing adapter card with a newly generated one. If set to False, will only generate an adapter card, if none exists. Defaults to False.

Returns

The url of the adapter repository on the model hub.

Return type

str
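A heavily simplified sketch of a push call; the repository name and the datasets_tag value are placeholders, and as noted above either adapterhub_tag or datasets_tag must be given when a new adapter card is generated:

>>> from transformers import RobertaModelWithHeads

>>> model = RobertaModelWithHeads.from_pretrained("roberta-base")
>>> model.add_adapter("my_task")
>>> model.push_adapter_to_hub("my-roberta-adapter", "my_task", datasets_tag="imdb")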

save_adapter(save_directory: str, adapter_name: str, with_head: bool = True, meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)

Saves an adapter and its configuration file to a directory so that it can be shared or reloaded using load_adapter().

Parameters
  • save_directory (str) – Path to a directory where the adapter should be saved.

  • adapter_name (str) – Name of the adapter to be saved.

Raises

ValueError – If the given adapter name is invalid.
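A minimal save-and-reload sketch, assuming an adapter named "my_adapter" has been added as shown earlier; the target directory is arbitrary:

>>> from transformers import RobertaModelWithHeads

>>> model = RobertaModelWithHeads.from_pretrained("roberta-base")
>>> model.add_adapter("my_adapter")
>>> model.save_adapter("./saved_adapter", "my_adapter")

>>> # the directory can later be passed to load_adapter()
>>> model.load_adapter("./saved_adapter", load_as="my_adapter_reloaded")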

save_adapter_fusion(save_directory: str, adapter_names: list, meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)

Saves an AdapterFusion layer and its configuration file to a directory so that it can be shared or reloaded using load_adapter_fusion().

Parameters
  • save_directory (str) – Path to a directory where the AdapterFusion should be saved.

  • adapter_names (list) – Names of the adapters whose fusion layer should be saved.

Raises

ValueError – If the given adapter name is invalid.

save_all_adapter_fusions(save_directory: str, meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)

Saves all AdapterFusion layers of this model together with their configuration to subfolders of the given location.

Parameters

save_directory (str) – Path to a directory where the adapters should be saved.

save_all_adapters(save_directory: str, with_head: bool = True, meta_dict: dict = None, custom_weights_loaders: Optional[List[transformers.adapters.loading.WeightsLoader]] = None)

Saves all adapters of this model together with their configuration to subfolders of the given location.

Parameters

save_directory (str) – Path to a directory where the adapters should be saved.

set_active_adapters(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], skip_layers: Optional[List[int]] = None)

Sets the adapter modules to be used by default in every forward pass. This setting can be overridden by passing the adapter_names parameter in the forward() pass. If no adapter with the given name is found, no module of the respective type will be activated. In case the calling model class supports named prediction heads, this method will attempt to activate a prediction head with the name of the last adapter in the list of passed adapter names.

Parameters

adapter_setup (list) – The list of adapters to be activated by default. Can be a fusion or stacking configuration.
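A sketch of the kinds of setups that can be passed here; Stack is assumed to be another composition block available from transformers.adapters.composition, and both adapters are assumed to have been added already:

>>> from transformers import RobertaModelWithHeads
>>> from transformers.adapters.composition import Stack

>>> model = RobertaModelWithHeads.from_pretrained("roberta-base")
>>> model.add_adapter("adapter_a")
>>> model.add_adapter("adapter_b")

>>> # activate a single adapter ...
>>> model.set_active_adapters("adapter_a")
>>> # ... or a composition, e.g. stacking adapter_a below adapter_b
>>> model.set_active_adapters(Stack("adapter_a", "adapter_b"))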

tie_weights()

Tie the weights between the input embeddings and the output embeddings.

If the torchscript flag is set in the configuration, parameter sharing cannot be used, so the weights are cloned instead.

train_adapter(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock])

Sets the model into a mode for training the given adapters. If self.base_model is self, the model class must inherit from a class that implements this method to prevent infinite recursion.
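A short sketch of a typical adapter-training setup using the methods documented in this section; the adapter and head names are arbitrary:

>>> from transformers import RobertaModelWithHeads

>>> model = RobertaModelWithHeads.from_pretrained("roberta-base")
>>> model.add_adapter("my_task")
>>> model.add_classification_head("my_task", num_labels=2)

>>> # freeze the pre-trained weights and enable training of the adapter only
>>> model.train_adapter("my_task")
>>> model.set_active_adapters("my_task")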

train_adapter_fusion(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], unfreeze_adapters=False)

Sets the model into a mode for training AdapterFusion determined by a list of adapter names. If self.base_model is self, the model class must inherit from a class that implements this method to prevent infinite recursion.

train_fusion(adapter_setup: Union[list, transformers.adapters.composition.AdapterCompositionBlock], unfreeze_adapters=False)

Sets the model into a mode for training AdapterFusion determined by a list of adapter names.

RobertaForMaskedLM

class transformers.RobertaForMaskedLM(config)

RoBERTa Model with a language modeling head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None, adapter_names=None)

The RobertaForMaskedLM forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

    What are attention masks?

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,

    • 1 corresponds to a sentence B token.

    What are token type IDs?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) –

    Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,

    • 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional) – Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].

  • kwargs (Dict[str, any], optional, defaults to {}) – Used to hide legacy arguments that have been deprecated.

Returns

A MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Masked language modeling (MLM) loss.

  • logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

MaskedLMOutput or tuple(torch.FloatTensor)

Example:

>>> from transformers import RobertaTokenizer, RobertaForMaskedLM
>>> import torch

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = RobertaForMaskedLM.from_pretrained('roberta-base')

>>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
>>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]

>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss
>>> logits = outputs.logits

get_output_embeddings()

Returns the model’s output embeddings.

Returns

A torch module mapping hidden states to vocabulary.

Return type

nn.Module

RobertaForSequenceClassification

class transformers.RobertaForSequenceClassification(config)

RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None, adapter_names=None)

The RobertaForSequenceClassification forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

    What are attention masks?

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,

    • 1 corresponds to a sentence B token.

    What are token type IDs?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) –

    Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,

    • 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

  • labels (torch.LongTensor of shape (batch_size,), optional) – Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if config.num_labels > 1 a classification loss is computed (Cross-Entropy).

Returns

A SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Classification (or regression if config.num_labels==1) loss.

  • logits (torch.FloatTensor of shape (batch_size, config.num_labels)) – Classification (or regression if config.num_labels==1) scores (before SoftMax).

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

SequenceClassifierOutput or tuple(torch.FloatTensor)

Example:

>>> from transformers import RobertaTokenizer, RobertaForSequenceClassification
>>> import torch

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = RobertaForSequenceClassification.from_pretrained('roberta-base')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss
>>> logits = outputs.logits

RobertaForTokenClassification

class transformers.RobertaForTokenClassification(config)

Roberta Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (RobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None, adapter_names=None)

The RobertaForTokenClassification forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using RobertaTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

    What are attention masks?

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,

    • 1 corresponds to a sentence B token.

    What are token type IDs?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) –

    Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,

    • 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional) – Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].

Returns

A TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RobertaConfig) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Classification loss.

  • logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) – Classification scores (before SoftMax).

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

TokenClassifierOutput or tuple(torch.FloatTensor)

Example:

>>> from transformers import RobertaTokenizer, RobertaForTokenClassification
>>> import torch

>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = RobertaForTokenClassification.from_pretrained('roberta-base')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([1] * inputs["input_ids"].size(1)).unsqueeze(0)  # Batch size 1

>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss
>>> logits = outputs.logits