Whisper¶
The Whisper model was presented in Robust Speech Recognition via Large-Scale Weak Supervision by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.
Whisper is a state-of-the-art speech recognition model trained on 680,000 hours of multilingual and multitask data, presented by OpenAI.
The abstract from the paper is the following:
We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.
WhisperAdapterModel¶
- class adapters.WhisperAdapterModel(config: WhisperConfig, **kwargs)¶
Whisper Model with the option to add multiple flexible prediction heads on top. This model inherits from [PreTrainedModel]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
- Parameters
config ([WhisperConfig]) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [~PreTrainedModel.from_pretrained] method to load the model weights.
- property active_adapters: AdapterCompositionBlock¶
Gets the current active adapters of the model. In case of multi-adapter inference (combining multiple adapters for inference), returns the list of all active adapters so that users can deal with them accordingly.
For previous PEFT versions (which do not support multi-adapter inference), module.active_adapter will return a single string.
If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the official PEFT documentation: https://huggingface.co/docs/peft
- property active_head: Union[str, List[str]]¶
The active prediction head configuration of this model. Can be either the name of a single available head (string) or a list of multiple available heads. In case of a list of heads, the same base model is forwarded through all specified heads.
- Returns
A string or a list of strings describing the active head configuration.
- Return type
Union[str, List[str]]
- adapter_fusion_to(adapter_names: Union[Fuse, list, str], device: Optional[Union[device, str]] = None, dtype: Optional[dtype] = None)¶
Moves the adapter fusion layer with the given name to the specified device and data type.
- Parameters
adapter_names (Union[Fuse, list, str]) – The name of the adapter fusion layer to be moved.
device (torch.device or str, optional) – The device on which the adapter fusion layer should be moved.
dtype (torch.dtype, optional) – The data type to which the adapter fusion layer should be cast.
- adapter_summary(as_dict=False) Union[str, dict] ¶
Returns a string summary of all adapters currently added to the model. Each entry in the summary table has the following attributes:
name: the name of the adapter
architecture: the architectural base of the adapter
#param: the number of parameters of the adapter
%param: the number of parameters of the adapter relative to the full model
active: whether the adapter is active
train: whether the adapter weights are enabled for training
- adapter_to(name: str, device: Optional[Union[device, str]] = None, dtype: Optional[dtype] = None)¶
Moves the adapter with the given name to the specified device and data type.
- Parameters
name (str) – The name of the adapter to be moved.
device (torch.device or str, optional) – The device on which the adapter should be moved.
dtype (torch.dtype, optional) – The data type to which the adapter should be cast.
- add_adapter(adapter_name: str, config=None, overwrite_ok: bool = False, set_active: bool = False)¶
Adds a new adapter module of the specified type to the model.
- Parameters
adapter_name (str) – The name of the adapter module to be added.
config (str or dict, optional) –
The adapter configuration, can be either:
the string identifier of a pre-defined configuration dictionary
a configuration dictionary specifying the full config
if not given, the default configuration for this adapter type will be used
overwrite_ok (bool, optional) – Overwrite an adapter with the same name if it exists. By default (False), an exception is thrown.
set_active (bool, optional) – Set the adapter to be the active one. By default (False), the adapter is added but not activated.
If self.base_model is self, the model class must inherit from a class that implements this method, to preclude infinite recursion.
- add_adapter_fusion(adapter_names: Union[Fuse, list, str], config=None, overwrite_ok: bool = False, set_active: bool = False)¶
Adds AdapterFusion to the model with all the necessary configurations and weight initializations.
- Parameters
adapter_names (Fuse or list or str) –
AdapterFusion layer to add. Can be either:
a Fuse composition block
a list of adapter names to fuse
a comma-separated string of adapter names to fuse
config (str or dict) –
adapter fusion configuration, can be either:
a string identifying a pre-defined adapter fusion configuration
a dictionary representing the adapter fusion configuration
the path to a file containing the adapter fusion configuration
overwrite_ok (bool, optional) – Overwrite an AdapterFusion layer with the same name if it exists. By default (False), an exception is thrown.
set_active (bool, optional) – Activate the added AdapterFusion. By default (False), the AdapterFusion is added but not activated.
- add_seq2seq_lm_head(head_name, layers=1, overwrite_ok=False)¶
Adds a sequence-to-sequence language modeling head on top of the model.
- Parameters
head_name (str) – The name of the head.
layers (int, optional) – Number of layers. Defaults to 1.
overwrite_ok (bool, optional) – Force overwrite if a head with the same name exists. Defaults to False.
- apply_to_adapter_layers(fn)¶
Applies a function to all adapter layers of the model.
- apply_to_basemodel_childs(fn)¶
Applies a function to all direct children of the model if they are an instance of AdapterLayerBase.
- average_adapter(adapter_name: str, adapter_list: Union[List[str], Dict[str, float]], weights: Optional[List[float]] = None, combine_strategy: str = 'linear', normalize_weights: bool = True, overwrite_ok: bool = False, set_active: bool = False, svd_rank: Optional[int] = None)¶
Adds a new adapter module as weighted average of a set of existing adapter modules.
- Parameters
adapter_name (str) – The name of the adapter module to be added.
adapter_list (List[str] or Dict[str, float]) – Specifies the existing adapters whose weights should be averaged. Can either be a list of adapter names or a dictionary mapping adapter names to weights.
weights (Optional[List[float]], optional) – The weights corresponding to each adapter module in the list. If not provided, equal weights will be assigned to each adapter.
combine_strategy (str, optional) – The strategy to combine the adapter modules. Available options are “linear”, “lora_linear_only_negate_b”, and “lora_delta_w_svd”. See https://docs.adapterhub.ml/adapter_composition.html#merging-adapters Defaults to “linear”.
normalize_weights (bool, optional) – Whether to normalize the weights. If True, the weights will be normalized to sum up to 1. Defaults to True.
overwrite_ok (bool, optional) – Overwrite an adapter with the same name if it exists. By default (False), an exception is thrown.
set_active (bool, optional) – Set the adapter to be the active one. By default (False), the adapter is added but not activated.
svd_rank (int, optional) – The rank to be used for Singular Value Decomposition (SVD) when averaging LoRA adapters. This parameter is only applicable when the combine_strategy is set to “lora_delta_w_svd”. Defaults to None.
- average_head(head_name: str, head_list: Union[List[str], Dict[str, float]], weights: Optional[List[float]] = None, normalize_weights: bool = True, overwrite_ok: bool = False, set_active: bool = False)¶
Adds a new prediction head as a weighted average of a set of existing prediction heads.
- Parameters
head_name (str) – The name of the new prediction head to be added.
head_list (List[str] or Dict[str, float]) – Specifies the existing heads whose weights should be averaged. Can either be a list of head names or a dictionary mapping head names to weights.
weights (Optional[List[float]], optional) – The weights corresponding to each head in the list. If not provided, equal weights will be assigned to each head.
normalize_weights (bool, optional) – Whether to normalize the weights. If True, the weights will be normalized to sum up to 1. Defaults to True.
overwrite_ok (bool, optional) – Overwrite a head with the same name if it exists. By default (False), an exception is thrown.
set_active (bool, optional) – Set the head to be the active one. By default (False), the head is added but not activated.
- delete_adapter(adapter_name: str)¶
Deletes the adapter with the specified name from the model.
- Parameters
adapter_name (str) – The name of the adapter.
- delete_adapter_fusion(adapter_names: Union[Fuse, list, str])¶
Deletes the AdapterFusion layer of the specified adapters.
- Parameters
adapter_names (Union[Fuse, list, str]) – AdapterFusion layer to delete.
- delete_head(head_name: str)¶
Deletes the prediction head with the specified name from the model.
- Parameters
head_name (str) – The name of the prediction head to delete.
- eject_prefix_tuning(name: str)¶
Converts the prefix tuning with the given name from the reparameterized form into the flat form.
- Parameters
name (str) – The name of the prefix tuning.
- forward(input_features=None, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, head_mask=None, decoder_head_mask=None, cross_attn_head_mask=None, encoder_outputs=None, past_key_values=None, decoder_inputs_embeds=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, head=None, output_adapter_gating_scores=False, output_adapter_fusion_attentions=False, **kwargs)¶
The [WhisperAdapterModel] forward method, overrides the __call__ special method.
<Tip>
Although the recipe for the forward pass needs to be defined within this function, one should call the [Module] instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
</Tip>
- Parameters
input_features (torch.FloatTensor of shape (batch_size, feature_size, sequence_length)) – Float values mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_features, the [AutoFeatureExtractor] should be used for extracting the mel features, padding and conversion into a tensor of type torch.FloatTensor. See [~WhisperFeatureExtractor.__call__]
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) –
Mask to avoid performing SpecAugment data augmentation on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
[What are attention masks?](../glossary#attention-mask)
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) –
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using [WhisperTokenizer]. See [PreTrainedTokenizer.encode] and [PreTrainedTokenizer.__call__] for details.
[What are decoder input IDs?](../glossary#decoder-input-ids)
Whisper uses the decoder_start_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) –
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
If you want to change padding behavior, you should read [modeling_whisper._prepare_decoder_attention_mask] and modify to your needs. See diagram 1 in [the BART paper](https://arxiv.org/abs/1910.13461) for more information on the default strategy.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) –
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) –
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) –
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) – Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (EncoderDecoderCache or tuple(tuple(torch.FloatTensor)), optional) –
Pre-computed hidden-states that can be used to speed up auto-regressive (sequential) decoding. There are four sets of pre-computed hidden-states: key and value states in the self-attention blocks (2) and in the cross-attention blocks (2). The past_key_values are returned when use_cache=True is passed or when config.use_cache=True.
Two formats are allowed: - An [~cache_utils.EncoderDecoderCache] instance; - Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) – Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
use_cache (bool, optional) – If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) – Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.
cache_position (torch.LongTensor of shape (sequence_length), optional) – Indices depicting the position of the input sequence tokens in the sequence. It is used to update the cache in the correct position and to infer the complete sequence length.
labels (torch.LongTensor of shape (batch_size,), optional) – Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1, a classification loss is computed (Cross-Entropy).
- forward_context(context: ForwardContext, *args, **kwargs)¶
This method is called by the ForwardContext at the beginning of the forward pass.
- forward_head(all_outputs, head_name=None, cls_output=None, attention_mask=None, return_dict=False, context=None, **kwargs)¶
The forward pass through a prediction head configuration. There are three ways to specify the used prediction head configuration (in order of priority):
If a head_name is passed, the head with the given name is used.
If the forward call is executed within an AdapterSetup context, the head configuration is read from the context.
If the active_head property is set, the head configuration is read from there.
- Parameters
all_outputs (dict) – The outputs of the base model.
head_name (str, optional) – The name of the prediction head to use. If None, the active head is used.
cls_output (torch.Tensor, optional) – The classification output of the model.
attention_mask (torch.Tensor, optional) – The attention mask of the model.
return_dict (bool) – Whether or not to return a ModelOutput instead of a plain tuple.
get_cls_from_eos_tokens (bool) – If set to True, retrieve classifier token representations from the last <eos> token in the sequence. Setting to True requires eos_mask to be passed as well.
**kwargs – Additional keyword arguments passed to the forward pass of the head.
- freeze_encoder()¶
Calling this function will disable the gradient computation for the Whisper encoder so that its parameters will not be updated during training.
- freeze_model(freeze=True)¶
Freezes all weights of the model.
- get_adapter(name)¶
If self.base_model is self, the model class must inherit from a class that implements this method, to preclude infinite recursion.
- get_labels(head_name=None)¶
Returns the labels the given head is assigning/predicting.
- Parameters
head_name (str, optional) – The name of the head whose labels should be returned. Default is None. If the name is None, the labels of the active head are returned.
Returns: labels
- get_labels_dict(head_name=None)¶
Returns the id2label dict for the given head.
- Parameters
head_name (str, optional) – The name of the head whose labels should be returned. Default is None. If the name is None, the id2label dict of the active head is returned.
Returns: id2label
- get_output_embeddings() Union[Module, List[Module]] ¶
Returns the model’s output embeddings.
- Returns
A torch module mapping hidden states to vocabulary.
- Return type
nn.Module
- head_type()¶
Checks which head type the decorated function belongs to and raises an error if the model does not support the head type.
- init_adapters(model_config, adapters_config, add_prefix_tuning_pool=True)¶
This method initializes adapter modules and fusion modules from the model config.
- iter_layers() Iterable[Tuple[int, Module]] ¶
Iterates over all layers of the model.
- load_adapter(adapter_name_or_path: str, config: Optional[Union[dict, str]] = None, version: Optional[str] = None, model_name: Optional[str] = None, load_as: Optional[str] = None, with_head: bool = True, custom_weights_loaders: Optional[List[WeightsLoader]] = None, leave_out: Optional[List[int]] = None, id2label=None, set_active: bool = False, use_safetensors: bool = False, **kwargs) str ¶
Loads a pre-trained pytorch adapter module from the local file system or a remote location.
- Parameters
adapter_name_or_path (str) –
can be either:
the identifier of a pre-trained task adapter to be loaded from Adapter Hub
a path to a directory containing adapter weights saved using model.save_adapter()
a URL pointing to a zip folder containing a saved adapter module
config (dict or str, optional) – Deprecated.
version (str, optional) – The version of the adapter to be loaded.
model_name (str, optional) – Deprecated.
load_as (str, optional) – Load the adapter using this name. By default, the name with which the adapter was saved will be used.
leave_out – Dynamically drop adapter modules in the specified Transformer layers when loading the adapter.
set_active (bool, optional) – Set the loaded adapter to be the active one. By default (False), the adapter is loaded but not activated.
use_safetensors (bool, optional) – If True, weights are loaded via safetensors if safetensors checkpoint is available. Otherwise, the regular torch save method is used.
- Returns
The name with which the adapter was added to the model.
- Return type
str
- load_adapter_fusion(adapter_fusion_name_or_path: str, load_as: Optional[str] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, set_active: bool = False, with_head: bool = True, use_safetensors: bool = False, **kwargs) str ¶
Loads a pre-trained AdapterFusion layer from the local file system.
- Parameters
adapter_fusion_name_or_path (str) – a path to a directory containing AdapterFusion weights saved using model.save_adapter_fusion().
load_as (str, optional) – Load the AdapterFusion using this name. By default, the name with which the AdapterFusion layer was saved will be used.
set_active (bool, optional) – Activate the loaded AdapterFusion. By default (False), the AdapterFusion is loaded but not activated.
use_safetensors (bool, optional) – If True, weights are loaded via safetensors if safetensors checkpoint is available. Otherwise, the regular torch save method is used.
- Returns
The name with which the AdapterFusion was added to the model.
- Return type
str
- load_head(save_directory: str, load_as: Optional[str] = None, id2label: Optional[Dict[int, str]] = None, use_safetensors: bool = False, **kwargs) str ¶
Loads a model prediction head from a directory where it was saved using save_head().
- Parameters
save_directory (str) – Path to the directory where the prediction head is saved.
load_as (str, optional) – Load the head using this name. By default, the name with which the head was saved will be used.
id2label (Dict[int, str], optional) – Provide a custom mapping from class ids to class labels. Defaults to None.
use_safetensors (bool, optional) – If True, weights are loaded via safetensors if safetensors checkpoint is available. Otherwise, the regular torch save method is used.
- Returns
The name with which the prediction head was added to the model.
- Return type
str
- merge_adapter(name: str)¶
Merges the weights of the given LoRA module with the Transformer weights as described in the paper.
- Parameters
name (str) – LoRA module to merge.
- push_adapter_to_hub(repo_id: str, adapter_name: str, datasets_tag: Optional[str] = None, local_path: Optional[str] = None, commit_message: Optional[str] = None, private: Optional[bool] = None, token: Optional[Union[bool, str]] = None, overwrite_adapter_card: bool = False, create_pr: bool = False, revision: Optional[str] = None, commit_description: Optional[str] = None, adapter_card_kwargs: Optional[dict] = None)¶
Upload an adapter to HuggingFace’s Model Hub.
- Parameters
repo_id (str) – The name of the repository on the model hub to upload to.
adapter_name (str) – The name of the adapter to be uploaded.
organization (str, optional) – Organization in which to push the adapter (you must be a member of this organization). Defaults to None.
datasets_tag (str, optional) – Dataset identifier from https://huggingface.co/datasets. Defaults to None.
local_path (str, optional) – Local path used as clone directory of the adapter repository. If not specified, will create a temporary directory. Defaults to None.
commit_message (str, optional) – Message to commit while pushing. Will default to "add config", "add tokenizer" or "add model" depending on the type of the class.
private (bool, optional) – Whether or not the repository created should be private (requires a paying subscription).
token (bool or str, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified.
overwrite_adapter_card (bool, optional) – Overwrite an existing adapter card with a newly generated one. If set to False, will only generate an adapter card, if none exists. Defaults to False.
create_pr (bool, optional) – Whether or not to create a PR with the uploaded files or directly commit.
revision (str, optional) – Branch to push the uploaded files to.
commit_description (str, optional) – The description of the commit that will be created.
- Returns
The url of the adapter repository on the model hub.
- Return type
str
- reset_adapter()¶
Resets weights of a LoRA module merged using model.merge_adapter(name).
- save_adapter(save_directory: str, adapter_name: str, with_head: bool = True, meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, use_safetensors: bool = False)¶
Saves an adapter and its configuration file to a directory so that it can be shared or reloaded using load_adapter().
- Parameters
save_directory (str) – Path to a directory where the adapter should be saved.
adapter_name (str) – Name of the adapter to be saved.
use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.
- Raises
ValueError – If the given adapter name is invalid.
- save_adapter_fusion(save_directory: str, adapter_names: Union[Fuse, list, str], meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, with_head: Union[bool, str] = False, use_safetensors: bool = False)¶
Saves an AdapterFusion layer and its configuration file to a directory so that it can be shared or reloaded using load_adapter_fusion().
- Parameters
save_directory (str) – Path to a directory where the AdapterFusion should be saved.
adapter_names (Union[Fuse, list, str]) – AdapterFusion to be saved.
with_head (Union[bool, str]) – If True, will save a head with the same name as the AdapterFusionLayer. If a string, this will be used as the name of the head to be saved.
use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.
- Raises
ValueError – If the given AdapterFusion name is invalid.
- save_all_adapter_fusions(save_directory: str, meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, use_safetensors: bool = False)¶
Saves all AdapterFusion layers of this model together with their configuration to subfolders of the given location.
- Parameters
save_directory (str) – Path to a directory where the AdapterFusion layers should be saved.
use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.
- save_all_adapters(save_directory: str, with_head: bool = True, meta_dict: Optional[dict] = None, custom_weights_loaders: Optional[List[WeightsLoader]] = None, use_safetensors: bool = False)¶
Saves all adapters of this model together with their configuration to subfolders of the given location.
- Parameters
save_directory (str) – Path to a directory where the adapters should be saved.
use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.
- save_all_heads(save_directory: str, use_safetensors: bool = False)¶
Saves all prediction heads of this model to subfolders of the given location.
- Parameters
save_directory (str) – Path to the base directory where prediction heads should be saved.
use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.
- save_head(save_directory: str, head_name: Optional[str] = None, use_safetensors: bool = False) None ¶
Saves a model prediction head to a directory such that it can be reloaded using load_head().
- Parameters
save_directory (str) – Path to the directory where the prediction head should be saved.
head_name (str, optional) – Name of the head to save. Set to None if model only has one head. Defaults to None.
use_safetensors (bool, optional) – If True, weights are saved via safetensors. Otherwise, the regular torch save method is used.
- save_pretrained(save_directory: Union[str, PathLike], **kwargs)¶
Save a model and its configuration file to a directory, so that it can be re-loaded using the [~PreTrainedModel.from_pretrained] class method.
- Parameters
save_directory (str or os.PathLike) – Directory to which to save. Will be created if it doesn’t exist.
is_main_process (bool, optional, defaults to True) – Whether the process calling this is the main process or not. Useful when in distributed training like TPUs and need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.
state_dict (nested dictionary of torch.Tensor) – The state dictionary of the model to save. Will default to self.state_dict(), but can be used to only save parts of the model or if special precautions need to be taken when recovering the state dictionary of a model (like when using model parallelism).
save_function (Callable) – The function to use to save the state dictionary. Useful on distributed training like TPUs when one need to replace torch.save by another method.
push_to_hub (bool, optional, defaults to False) – Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).
max_shard_size (int or str, optional, defaults to “5GB”) –
The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be smaller than this size. If expressed as a string, it needs to be digits followed by a unit (like "5MB"). We default it to 5GB so that models can run easily on free-tier Google Colab instances without CPU OOM issues.
<Tip warning={true}>
If a single weight of the model is bigger than max_shard_size, it will be in its own checkpoint shard which will be bigger than max_shard_size.
</Tip>
safe_serialization (bool, optional, defaults to True) – Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle).
variant (str, optional) – If specified, weights are saved in the format pytorch_model.<variant>.bin.
token (str or bool, optional) – The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
save_peft_format (bool, optional, defaults to True) – For backward compatibility with the PEFT library, in case adapter weights are attached to the model, all keys of the adapter state dict need to be prepended with base_model.model. Advanced users can disable this behaviour by setting save_peft_format to False.
kwargs (Dict[str, Any], optional) – Additional key word arguments passed along to the [~utils.PushToHubMixin.push_to_hub] method.
- set_active_adapters(adapter_setup: Union[list, AdapterCompositionBlock], skip_layers: Optional[List[int]] = None)¶
Sets the adapter modules to be used by default in every forward pass. This setting can be overridden by passing the adapter_names parameter in the forward() pass. If no adapter with the given name is found, no module of the respective type will be activated. In case the calling model class supports named prediction heads, this method will attempt to activate a prediction head with the name of the last adapter in the list of passed adapter names.
- Parameters
adapter_setup (list) – The list of adapters to be activated by default. Can be a fusion or stacking configuration.
- tie_weights()¶
Tie the weights between the input embeddings and the output embeddings.
If the torchscript flag is set in the configuration, we can't handle parameter sharing, so we clone the weights instead.
- train_adapter(adapter_setup: Union[list, AdapterCompositionBlock], train_embeddings=False)¶
Sets the model in training mode for the given adapters. If self.base_model is self, the model class must inherit from a class that implements this method, to preclude infinite recursion.
- train_adapter_fusion(adapter_setup: Union[list, AdapterCompositionBlock], unfreeze_adapters=False)¶
Sets the model in training mode for AdapterFusion, determined by a list of adapter names. If self.base_model is self, the model class must inherit from a class that implements this method, to preclude infinite recursion.