AdapterLayer

class transformers.AdapterLayer(location_key: str, config)
adapter_fusion(adapter_setup: transformers.adapters.composition.Fuse, hidden_states, input_tensor, layer_norm, lvl=0)

Performs adapter fusion with the given adapters for the given input.
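The fusion mechanism can be illustrated with a toy sketch (not the library's implementation): for each token, the current hidden state acts as a query that attends over the outputs of the fused adapters, and the result is their attention-weighted combination. The function names and the simplified scoring (a plain dot product instead of learned query/key/value projections) are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adapter_fusion(hidden_states, adapter_outputs):
    """Toy AdapterFusion: per token, attend over the N adapter outputs.

    hidden_states:   (seq_len, hidden) -- used as the fusion query
    adapter_outputs: (N, seq_len, hidden) -- used as keys and values
    """
    # Score each adapter's output against the query, per token: (seq_len, N)
    scores = np.einsum("sh,nsh->sn", hidden_states, adapter_outputs)
    probs = softmax(scores, axis=-1)
    # Attention-weighted sum of the adapter outputs, per token: (seq_len, hidden)
    return np.einsum("sn,nsh->sh", probs, adapter_outputs)
```

With a single adapter the softmax weight is 1, so the fused output reduces to that adapter's output.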

adapter_layer_forward(hidden_states, input_tensor, layer_norm)

Called for each forward pass through the adapter layer.

adapter_parallel(adapter_setup: transformers.adapters.composition.Parallel, hidden_states, input_tensor, layer_norm, lvl=0)

For parallel execution of the adapters on the same input. The input is repeated N times along the batch dimension before being fed to the adapters (where N is the number of parallel adapters), so each adapter processes its own copy.
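The batch-repetition step described above can be sketched as follows (a minimal illustration, not the library's code; the function name is hypothetical):

```python
import numpy as np

def repeat_for_parallel(hidden_states, n_adapters):
    """Repeat the batch so each parallel adapter sees its own copy of the input.

    hidden_states: (batch, seq_len, hidden)
    returns:       (n_adapters * batch, seq_len, hidden)
    """
    # Stack n_adapters copies of the batch along the first axis.
    return np.tile(hidden_states, (n_adapters, 1, 1))
```

Each adapter then operates on its own `batch`-sized slice of the enlarged first axis.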

adapter_split(adapter_setup: transformers.adapters.composition.Split, hidden_states, input_tensor, layer_norm, lvl=0)

Splits the given input between the given adapters.
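The split semantics can be sketched like this (a toy version, assuming a split along the sequence dimension at a given index; the function signature is an illustration, not the library's API):

```python
import numpy as np

def adapter_split(hidden_states, left_adapter, right_adapter, split_index):
    """Route the first `split_index` positions through one adapter and the
    remaining positions through another, then re-concatenate the results.

    hidden_states: (batch, seq_len, hidden)
    """
    left = left_adapter(hidden_states[:, :split_index, :])
    right = right_adapter(hidden_states[:, split_index:, :])
    return np.concatenate([left, right], axis=1)
```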

adapter_stack(adapter_setup: transformers.adapters.composition.Stack, hidden_states, input_tensor, layer_norm, lvl=0)

Forwards the given input through the given stack of adapters.
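Stacking is plain sequential composition, which can be sketched in a few lines (an illustration of the semantics, not the library's implementation):

```python
def adapter_stack(hidden_states, adapters):
    """Feed the input through the adapters in order: the output of each
    adapter becomes the input of the next."""
    for adapter in adapters:
        hidden_states = adapter(hidden_states)
    return hidden_states
```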

add_fusion_layer(adapter_names: Union[List, str])

See BertModel.add_fusion_layer

enable_adapters(adapter_setup: transformers.adapters.composition.AdapterCompositionBlock, unfreeze_adapters: bool, unfreeze_fusion: bool)

Unfreezes a given list of adapters, the adapter fusion layer, or both.

Parameters
  • adapter_setup – the adapters to unfreeze (or the adapters whose fusion layer should be unfrozen)

  • unfreeze_adapters – whether the adapter weights should be unfrozen (made trainable)

  • unfreeze_fusion – whether the adapter fusion layer for the given adapters should be unfrozen
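The freeze/unfreeze semantics amount to toggling the trainability flag on the selected parameter groups. A minimal sketch, using a stand-in `Param` class instead of real `torch.nn.Parameter` objects (all names here are illustrative):

```python
class Param:
    """Stand-in for a trainable parameter with a requires_grad flag."""
    def __init__(self):
        self.requires_grad = False  # frozen by default

def enable_adapters(adapter_params, fusion_params, unfreeze_adapters, unfreeze_fusion):
    """Toy version: unfreeze the adapter weights, the fusion weights, or both."""
    if unfreeze_adapters:
        for p in adapter_params:
            p.requires_grad = True
    if unfreeze_fusion:
        for p in fusion_params:
            p.requires_grad = True
```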

forward(hidden_states, input_tensor, layer_norm)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself afterwards rather than calling forward directly: the instance call takes care of running the registered hooks, while a direct call to forward silently skips them.
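The distinction can be demonstrated with a minimal stand-in module (a sketch mirroring `torch.nn.Module` behaviour, not PyTorch's actual implementation):

```python
class Module:
    """Toy module: __call__ runs registered hooks around forward, while
    calling forward directly skips them."""
    def __init__(self):
        self.hooks = []

    def register_hook(self, fn):
        self.hooks.append(fn)

    def forward(self, x):
        return x + 1

    def __call__(self, x):
        out = self.forward(x)
        for hook in self.hooks:
            out = hook(out)  # hooks only run through the instance call
        return out
```

Here `module(x)` applies the hook to the forward output, whereas `module.forward(x)` returns the raw result and silently ignores the hook.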