XLM-RoBERTa

The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multilingual language model trained on 2.5 TB of filtered CommonCrawl data.

Note

This class is nearly identical to the PyTorch implementation of XLM-RoBERTa in Hugging Face Transformers. For more information, visit the corresponding section in their documentation.

XLMRobertaConfig

class transformers.XLMRobertaConfig(pad_token_id=1, bos_token_id=0, eos_token_id=2, **kwargs)

This class overrides RobertaConfig. Please check the superclass for the appropriate documentation alongside usage examples.
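
Example

A minimal sketch of instantiating a model from a default configuration. Note that the default values (and the resulting randomly initialized weights) do not match the pretrained xlm-roberta-base checkpoint, which should instead be loaded with from_pretrained().

from transformers import XLMRobertaConfig, XLMRobertaModel

# Build a configuration with default values (inherited from RobertaConfig)
config = XLMRobertaConfig()

# Instantiate a model from the configuration; weights are randomly initialized, not pretrained
model = XLMRobertaModel(config)

# The configuration can be inspected or serialized like any other model config
print(config.vocab_size)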

XLMRobertaTokenizer

class transformers.XLMRobertaTokenizer(vocab_file, bos_token='<s>', eos_token='</s>', sep_token='</s>', cls_token='<s>', unk_token='<unk>', pad_token='<pad>', mask_token='<mask>', sp_model_kwargs: Optional[Dict[str, Any]] = None, **kwargs)

Adapted from RobertaTokenizer and XLNetTokenizer. Based on SentencePiece.

This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

Parameters
  • vocab_file (str) – Path to the vocabulary file.

  • bos_token (str, optional, defaults to "<s>") –

    The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.

    Note

    When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.

  • eos_token (str, optional, defaults to "</s>") –

    The end of sequence token.

    Note

    When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.

  • sep_token (str, optional, defaults to "</s>") – The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.

  • cls_token (str, optional, defaults to "<s>") – The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.

  • unk_token (str, optional, defaults to "<unk>") – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

  • pad_token (str, optional, defaults to "<pad>") – The token used for padding, for example when batching sequences of different lengths.

  • mask_token (str, optional, defaults to "<mask>") – The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.

  • additional_special_tokens (List[str], optional) – Additional special tokens used by the tokenizer.

  • sp_model_kwargs (dict, optional) –

    Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for SentencePiece can be used, among other things, to set the following (see the example after this parameter list):

    • enable_sampling: Enable subword regularization.

    • nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.

      • nbest_size = {0,1}: No sampling is performed.

      • nbest_size > 1: Samples from the nbest_size results.

      • nbest_size < 0: Assumes that nbest_size is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm.

    • alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout.
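
Example

A sketch of enabling SentencePiece subword regularization through sp_model_kwargs; the sampling hyperparameters below are illustrative, not recommended values.

from transformers import XLMRobertaTokenizer

# Enable subword regularization via SentencePiece sampling (unigram model);
# nbest_size=-1 samples from the full lattice and alpha controls the smoothing
tokenizer = XLMRobertaTokenizer.from_pretrained(
    "xlm-roberta-base",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)

# With sampling enabled, repeated calls may return different segmentations of the same text
print(tokenizer.tokenize("unaffable"))
print(tokenizer.tokenize("unaffable"))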

sp_model

The SentencePiece processor that is used for every conversion (string, tokens and IDs).

Type

SentencePieceProcessor

build_inputs_with_special_tokens(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int]

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An XLM-RoBERTa sequence has the following format:

  • single sequence: <s> X </s>

  • pair of sequences: <s> A </s></s> B </s>

Parameters
  • token_ids_0 (List[int]) – List of IDs to which the special tokens will be added.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.

Returns

List of input IDs with the appropriate special tokens.

Return type

List[int]
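
Example

A short sketch of how the special tokens are added, using the public xlm-roberta-base checkpoint; the input sentences are arbitrary.

from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")

ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("How are you?"))

# Single sequence: <s> X </s>
single = tokenizer.build_inputs_with_special_tokens(ids_a)

# Pair of sequences: <s> A </s></s> B </s>
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)

print(tokenizer.convert_ids_to_tokens(pair))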

create_token_type_ids_from_sequences(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int]

Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does not make use of token type ids, therefore a list of zeros is returned.

Parameters
  • token_ids_0 (List[int]) – List of IDs.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.

Returns

List of zeros.

Return type

List[int]
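
Example

A brief sketch showing that the returned mask is all zeros, since XLM-RoBERTa ignores token type ids; the input sentences are arbitrary.

from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")

ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("How are you?"))

# The mask covers both sequences plus the special tokens and contains only zeros
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
assert set(token_type_ids) == {0}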

get_special_tokens_mask(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False) → List[int]

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method.

Parameters
  • token_ids_0 (List[int]) – List of IDs.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.

  • already_has_special_tokens (bool, optional, defaults to False) – Whether or not the token list is already formatted with special tokens for the model.

Returns

A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Return type

List[int]
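
Example

A small sketch of both call patterns; the input sentence is arbitrary.

from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")

# Mask for a sequence *without* special tokens: the positions where <s> and </s>
# would be inserted are marked with 1, the original tokens with 0
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
mask = tokenizer.get_special_tokens_mask(ids)

# If the ids already contain special tokens, pass already_has_special_tokens=True
encoded = tokenizer.encode("Hello world")
mask_existing = tokenizer.get_special_tokens_mask(encoded, already_has_special_tokens=True)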

save_vocabulary(save_directory: str, filename_prefix: Optional[str] = None) → Tuple[str]

Save only the vocabulary of the tokenizer (vocabulary + added tokens).

This method won’t save the configuration and special token mappings of the tokenizer. Use save_pretrained() to save the whole state of the tokenizer.

Parameters
  • save_directory (str) – The directory in which to save the vocabulary.

  • filename_prefix (str, optional) – An optional prefix to add to the names of the saved files.

Returns

Paths to the files saved.

Return type

Tuple[str]
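
Example

A sketch of saving the SentencePiece vocabulary to a local directory; the directory name and prefix are illustrative.

import os
from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")

# Save only the SentencePiece vocabulary file; the target directory must exist
os.makedirs("./xlmr_vocab", exist_ok=True)
vocab_files = tokenizer.save_vocabulary("./xlmr_vocab", filename_prefix="xlmr")
print(vocab_files)

# To persist the complete tokenizer state (vocabulary, special tokens, configuration),
# use save_pretrained() instead
tokenizer.save_pretrained("./xlmr_tokenizer")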

XLMRobertaModel

class transformers.XLMRobertaModel(config, add_pooling_layer=True)

The bare XLM-RoBERTa Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLMRobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This class overrides RobertaModel. Please check the superclass for the appropriate documentation alongside usage examples.

config_class

alias of transformers.models.xlm_roberta.configuration_xlm_roberta.XLMRobertaConfig
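
Example

A minimal sketch of a forward pass with the public xlm-roberta-base checkpoint; the input sentence is arbitrary.

import torch
from transformers import XLMRobertaTokenizer, XLMRobertaModel

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
model = XLMRobertaModel.from_pretrained("xlm-roberta-base")

# The model is multilingual, so inputs can be in any of the pretraining languages
inputs = tokenizer("Bonjour, comment ça va ?", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
pooled_output = outputs.pooler_output          # (batch_size, hidden_size)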

XLMRobertaModelWithHeads

class transformers.XLMRobertaModelWithHeads(config)

XLM-RoBERTa Model with the option to add multiple flexible heads on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLMRobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This class overrides RobertaModelWithHeads. Please check the superclass for the appropriate documentation alongside usage examples.

config_class

alias of transformers.models.xlm_roberta.configuration_xlm_roberta.XLMRobertaConfig
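
Example

A sketch of adding a flexible prediction head; add_classification_head() is assumed to behave as on the other ...ModelWithHeads classes of this library, and the head name "sentiment" and label count are illustrative.

from transformers import XLMRobertaTokenizer, XLMRobertaModelWithHeads

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
model = XLMRobertaModelWithHeads.from_pretrained("xlm-roberta-base")

# Add a (randomly initialized) classification head named "sentiment" with two labels
model.add_classification_head("sentiment", num_labels=2)

inputs = tokenizer("Das ist großartig!", return_tensors="pt")
outputs = model(**inputs)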

XLMRobertaForMaskedLM

class transformers.XLMRobertaForMaskedLM(config)

XLM-RoBERTa Model with a language modeling head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLMRobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This class overrides RobertaForMaskedLM. Please check the superclass for the appropriate documentation alongside usage examples.

config_class

alias of transformers.models.xlm_roberta.configuration_xlm_roberta.XLMRobertaConfig
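
Example

A short sketch of masked-token prediction with the pretrained LM head; the input sentence is illustrative.

import torch
from transformers import XLMRobertaTokenizer, XLMRobertaForMaskedLM

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
model = XLMRobertaForMaskedLM.from_pretrained("xlm-roberta-base")

inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Find the position of the mask token and take the highest-scoring vocabulary entry
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))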

XLMRobertaForSequenceClassification

class transformers.XLMRobertaForSequenceClassification(config)

XLM-RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLMRobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This class overrides RobertaForSequenceClassification. Please check the superclass for the appropriate documentation alongside usage examples.

config_class

alias of transformers.models.xlm_roberta.configuration_xlm_roberta.XLMRobertaConfig
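
Example

A sketch of a single training step; num_labels and the label value are placeholders for a binary task, and the classification head is randomly initialized on top of the pretrained encoder.

import torch
from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
model = XLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=2)

inputs = tokenizer("This movie was great!", return_tensors="pt")
labels = torch.tensor([1])  # placeholder label for a binary task

outputs = model(**inputs, labels=labels)
loss, logits = outputs.loss, outputs.logits
loss.backward()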

XLMRobertaForMultipleChoice

class transformers.XLMRobertaForMultipleChoice(config)

XLM-RoBERTa Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLMRobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This class overrides RobertaForMultipleChoice. Please check the superclass for the appropriate documentation alongside usage examples.

config_class

alias of transformers.models.xlm_roberta.configuration_xlm_roberta.XLMRobertaConfig
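
Example

A sketch of scoring two answer choices; the prompt and choices are illustrative and the multiple-choice head is randomly initialized.

import torch
from transformers import XLMRobertaTokenizer, XLMRobertaForMultipleChoice

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
model = XLMRobertaForMultipleChoice.from_pretrained("xlm-roberta-base")

prompt = "The weather today is"
choices = ["sunny and warm.", "a kind of cheese."]

# Encode the prompt against each choice; the model expects (batch_size, num_choices, seq_len)
encoding = tokenizer([prompt, prompt], choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}
labels = torch.tensor(0).unsqueeze(0)  # the first choice is treated as correct here

outputs = model(**inputs, labels=labels)
loss, logits = outputs.loss, outputs.logits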

XLMRobertaForTokenClassification

class transformers.XLMRobertaForTokenClassification(config)

XLM-RoBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (XLMRobertaConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This class overrides RobertaForTokenClassification. Please check the superclass for the appropriate documentation alongside usage examples.

config_class

alias of transformers.models.xlm_roberta.configuration_xlm_roberta.XLMRobertaConfig
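
Example

A sketch of token-level prediction; num_labels stands in for an NER tag set and the token classification head is randomly initialized.

import torch
from transformers import XLMRobertaTokenizer, XLMRobertaForTokenClassification

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
model = XLMRobertaForTokenClassification.from_pretrained("xlm-roberta-base", num_labels=9)

inputs = tokenizer("Paris is the capital of France.", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# One predicted label id per input token (including the special tokens)
predictions = logits.argmax(dim=-1)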