Adding Adapters to a Model
This document gives an overview of how new model architectures of Hugging Face Transformers can be supported by adapters. Before delving into implementation details, you should familiarize yourself with the main design philosophies of adapters:
- Adapters should integrate seamlessly with existing model classes: If a model architecture supports adapters, it should be possible to use them with all model classes of this architecture.
- Copied code should be minimal: adapters extensively uses Python mixins to add adapter support to HF models. Functions that cannot be sufficiently modified by mixins are copied and then modified. Try to avoid copying functions as much as possible.
Adding adapter support to an existing model architecture requires modifying some parts of the model forward pass logic. These modifications are realized by the four files in the src/adapters/models/<model_type>/ directory. Let's examine the purpose of these files in the example of BERT. It's important to note that we are adapting the original Hugging Face model, implemented in transformers/models/bert/modeling_bert.py. The files in src/adapters/models/bert/ are:
1. src/adapters/models/bert/mixin_bert.py: This file contains mixins for each class we want to change. For example, in the BertSelfAttention class, we need to make changes for LoRA and Prefix Tuning. For this, we create a BertSelfAttentionAdaptersMixin to implement these changes (a rough sketch of such a mixin follows this file list). We will discuss how this works in detail below.
2. src/adapters/models/bert/modeling_bert.py: For some classes of the BERT implementation (e.g. BertLayer) the code can be sufficiently customized via mixins. For other classes (like BertSelfAttention), we need to edit the original code directly. These classes are copied into this file and modified there.
3. src/adapters/models/bert/adapter_model.py: In this file, the adapter model class is defined. This class allows flexible adding of and switching between multiple prediction heads of different types. This looks about the same for each model, except that each model has different heads and thus different head-adding methods.
4. src/adapters/models/bert/__init__.py: Defines Python's import structure.
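To make the role of mixin_bert.py more concrete, the following is a heavily simplified sketch of what a self-attention mixin can look like. The helper classes LoRALinear and PrefixTuningLayer, their import paths, and the init_adapters() signature are assumptions modeled on existing integrations and may differ between adapters versions, so always compare with an already supported model before copying anything.

```python
# Simplified, illustrative sketch of a self-attention mixin (not copy-paste ready;
# helper names, import paths, and signatures may differ between adapters versions).
from adapters.methods.lora import LoRALinear
from adapters.methods.prefix_tuning import PrefixTuningLayer


class BertSelfAttentionAdaptersMixin:
    """Adds LoRA and Prefix Tuning support to BertSelfAttention."""

    def init_adapters(self, model_config, adapters_config):
        # Wrap the query/key/value projections so that LoRA weights can be injected.
        self.query = LoRALinear.wrap(self.query, "selfattn", model_config, adapters_config, attn_key="q")
        self.key = LoRALinear.wrap(self.key, "selfattn", model_config, adapters_config, attn_key="k")
        self.value = LoRALinear.wrap(self.value, "selfattn", model_config, adapters_config, attn_key="v")

        # Prefix tuning prepends trainable key/value states inside the attention block.
        self.prefix_tuning = PrefixTuningLayer("self_prefix", model_config, adapters_config)
```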
Implementation Steps 📝
Now that we have discussed the purpose of every file in src/adapters/models/<model_type>/, we go through the integration of adapters into an existing model architecture step by step. The following steps might not be applicable to every model architecture.
1. Create the src/adapters/models/<model_type>/ directory and in it the 4 files: mixin_<model_type>.py, modeling_<model_type>.py, adapter_model.py and __init__.py.
2. In src/adapters/models/<model_type>/mixin_<model_type>.py, create mixins for any class you want to change and where you can't reuse an existing mixin from another class.
- To figure out which classes to change, think about where to insert LoRA, Prefix Tuning, and bottleneck adapters.
- You can use similar model implementations for guidance.
- Often, existing mixins of another class can be reused. E.g., BertGenerationLayer and the layer classes of other models derived from BERT use the BertLayerAdaptersMixin.
- To additionally support Prefix Tuning, it's necessary to apply the forward call to the PrefixTuningLayer module in the respective attention layer (see step 3 for how to modify the code of a Hugging Face class).
- Make sure the calls to bottleneck_layer_forward() are added in the right places.
- The mixin for the whole base model class (e.g., BertModel) should derive from ModelBaseAdaptersMixin and (if possible) InvertibleAdaptersMixin. This mixin should at least implement the iter_layers() method but might require additional modifications depending on the architecture (see the sketch after this step).
- If the model is a combination of different models, such as the EncoderDecoderModel, use ModelUsingSubmodelsAdaptersMixin instead of ModelBaseAdaptersMixin.
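As referenced above, the base model mixin mainly needs to implement iter_layers(). Below is a minimal sketch for the BERT case; the import path adapters.model_mixin and the attribute path self.encoder.layer are assumptions and must be adapted to your architecture and to the adapters version you are working with.

```python
# Minimal, illustrative base model mixin for a BERT-like architecture
# (import paths and base mixin names may differ between adapters versions).
from typing import Iterable, Tuple

import torch.nn as nn

from adapters.model_mixin import InvertibleAdaptersMixin, ModelBaseAdaptersMixin


class BertModelAdaptersMixin(InvertibleAdaptersMixin, ModelBaseAdaptersMixin):
    """Adds adapter support to the BertModel base class."""

    def iter_layers(self) -> Iterable[Tuple[int, nn.Module]]:
        # Yield each transformer layer together with its index so the adapter
        # methods know where to attach their modules.
        for i, layer in enumerate(self.encoder.layer):
            yield i, layer
```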
3. For those classes where the mixin is not enough to realize the wanted behavior, you must:
- Create a new class in src/adapters/models/<model_type>/modeling_<model_type>.py with the name <class>WithAdapters. This class should derive from the corresponding mixin and HF class.
- Copy the function you want to change into this class and modify it.
- E.g., the forward method of the BertSelfAttention class must be adapted to support prefix tuning. We therefore create a class BertSelfAttentionWithAdapters(BertSelfAttentionAdaptersMixin, BertSelfAttention), copy the forward method into it and modify it.
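In code, this pattern looks roughly as follows. This is only a skeleton: the actual forward body is the one copied from transformers, and the exact place and signature of the prefix tuning call are assumptions that should be checked against an already supported model.

```python
# Skeleton of a copied-and-modified class; the real forward body is copied
# verbatim from transformers and then edited.
from transformers.models.bert.modeling_bert import BertSelfAttention

from .mixin_bert import BertSelfAttentionAdaptersMixin


class BertSelfAttentionWithAdapters(BertSelfAttentionAdaptersMixin, BertSelfAttention):
    def forward(self, hidden_states, attention_mask=None, **kwargs):
        # 1. Paste the body of BertSelfAttention.forward from transformers here.
        # 2. After the key/value projection states are computed, insert the
        #    prefix tuning hook, e.g. (the exact call is version-dependent):
        #
        #    key_layer, value_layer, attention_mask = self.prefix_tuning(
        #        key_layer, value_layer, hidden_states, attention_mask
        #    )
        raise NotImplementedError("Copy and adapt the original forward here.")
```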
4. For each mixin whose class was not copied into modeling_<model_type>.py, add the mixin/class combination into MODEL_MIXIN_MAPPING in the file src/adapters/models/__init__.py.
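An entry in this mapping simply pairs the name of the original Hugging Face class with the mixin that should be applied to it. The concrete entries below are only meant to illustrate the shape of the mapping; the actual contents differ per library version.

```python
# Illustrative entries; the mapping pairs transformers class names with the
# corresponding adapter mixins (imports abbreviated to the BERT example).
from .bert.mixin_bert import BertLayerAdaptersMixin, BertModelAdaptersMixin

MODEL_MIXIN_MAPPING = {
    # ... entries for other architectures ...
    "BertLayer": BertLayerAdaptersMixin,
    "BertModel": BertModelAdaptersMixin,
}
```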
5. Create the adapter model:
- Adapter-supporting architectures should provide a new model class <model_type>AdapterModel. This class allows flexible adding of and switching between multiple prediction heads of different types.
- This is done in the adapter_model.py file:
  - This module should implement the <model_type>AdapterModel class, deriving from ModelWithFlexibleHeadsAdaptersMixin and <model_type>PreTrainedModel (see the sketch after this step).
  - In the model class, add methods for those prediction heads that make sense for the new model architecture.
  - Again, have a look at existing implementations.
- Define the classes to be added to Python's import structure in src/adapters/models/<model_type>/__init__.py. This will likely only be the <model_type>AdapterModel.
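A very rough sketch of such a class is shown below for BERT. The mixin ModelWithFlexibleHeadsAdaptersMixin, the ClassificationHead class, and the helper methods used here are assumptions based on existing implementations and differ in detail between adapters versions; the authoritative reference is an existing file such as src/adapters/models/bert/adapter_model.py.

```python
# Rough, illustrative sketch of a flexible-head adapter model class; compare
# with an existing adapter_model.py before implementing your own.
from transformers.models.bert.modeling_bert import BertModel, BertPreTrainedModel

from adapters.heads import ClassificationHead, ModelWithFlexibleHeadsAdaptersMixin


class BertAdapterModel(ModelWithFlexibleHeadsAdaptersMixin, BertPreTrainedModel):
    """BERT model that supports flexibly adding and switching prediction heads."""

    def __init__(self, config):
        super().__init__(config)
        self.bert = BertModel(config)  # wrap the plain base model
        self._init_head_modules()      # set up the container for prediction heads
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, head=None, **kwargs):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask, **kwargs)
        # Route the base model outputs through the currently active prediction head(s).
        return self.forward_head(outputs, head_name=head, attention_mask=attention_mask, **kwargs)

    # One method per head type that makes sense for this architecture.
    def add_classification_head(self, head_name, num_labels=2, overwrite_ok=False, id2label=None):
        head = ClassificationHead(self, head_name, num_labels=num_labels, id2label=id2label)
        self.add_prediction_head(head, overwrite_ok=overwrite_ok)
```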
6. Adapt the config classes:
- Adapt the config class to the requirements of adapters in src/adapters/wrappers/configuration.py.
- There are some naming differences in the config attributes of different model architectures. The adapter implementation requires some additional attributes with a specific name to be available. These currently include attention_probs_dropout_prob, as in the BertConfig class. If your model config does not provide these, add corresponding mappings to CONFIG_CLASS_KEYS_MAPPING (see the example after this step).
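Such a mapping entry translates the attribute names expected by adapters into the names actually used by the model's config class. The DistilBERT-style entry below is only an example of what such a translation can look like; the existing entries in the library may differ.

```python
# Example shape of a config key mapping entry: keys are the attribute names
# adapters expects, values are the names used by the model's own config class.
CONFIG_CLASS_KEYS_MAPPING = {
    # ... entries for other architectures ...
    "distilbert": {
        "hidden_dropout_prob": "dropout",
        "attention_probs_dropout_prob": "attention_dropout",
    },
}
```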
Testing

❓ In addition to the general Hugging Face model tests, there are adapter-specific test cases. All tests are executed from the tests folder. You need to add two different test classes.
1. Add a new test module for the adapter methods of your model architecture (e.g. tests/test_<model_type>.py).
- This file is used to test that everything related to the usage of adapters (adding, removing, activating, …) works.
- This module typically holds 2 test classes and a test base class (a skeleton is sketched below):
  - <model_type>AdapterTestBase: This class contains the model-specific attributes shared by all tests, e.g. the config class, a small test config, and the tokenizer name.
  - <model_type>AdapterTest derives from a collection of test mixins that hold various adapter tests (depending on the implementation).
  - <model_type>ClassConversionTest runs tests for correct class conversion if conversion of prediction heads is implemented.
2. Add a new test module for the <model_type>AdapterModel (e.g. tests/models/test_<model_type>.py).
- This file is used to test the AdapterModel class.
- This module typically holds 1 test class with the name <model_type>AdapterModelTest:
  - <model_type>AdapterModelTest derives directly from Hugging Face's existing model test class <model_type>ModelTest and adds <model_type>AdapterModel as a class to test.
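To illustrate the first of these modules, here is a hypothetical skeleton for BERT. The helper names AdapterTestBase and make_config as well as the concrete test mixins are assumptions modeled on the existing test suite; copy the actual imports and mixin list from the test file of a similar architecture.

```python
# Hypothetical test module skeleton (helper and mixin names are illustrative;
# mirror the test file of an already supported model).
import unittest

from transformers import BertConfig

from tests.methods import AdapterTestBase, make_config  # assumed helpers from the existing test suite
from tests.methods import BottleneckAdapterTestMixin, LoRATestMixin, PrefixTuningTestMixin  # assumed mixins


class BertAdapterTestBase(AdapterTestBase):
    # Model-specific attributes shared by all adapter tests.
    config_class = BertConfig
    config = make_config(
        BertConfig,
        hidden_size=32,
        num_hidden_layers=4,
        num_attention_heads=4,
        intermediate_size=37,
    )
    tokenizer_name = "bert-base-uncased"


class BertAdapterTest(
    BottleneckAdapterTestMixin,
    LoRATestMixin,
    PrefixTuningTestMixin,
    BertAdapterTestBase,
    unittest.TestCase,
):
    pass
```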
Documentation

❓ The documentation for adapters lives in the docs folder.

- Add a new file docs/classes/models/<model_type>.rst (oriented at the doc file in the HF docs). Make sure to include the <model_type>AdapterModel autodoc. Finally, list the file in index.rst.
- Add a new row for the model in the model table of the overview page at docs/model_overview.md, listing all the methods implemented by the new model.
Training Example Adapters
❓ To make sure the new adapter implementation works properly, it is useful to train some example adapters and compare the training results to full model fine-tuning. Ideally, this would include training adapters on one (or more) tasks that are good for demonstrating the new model architecture (e.g. GLUE benchmark for BERT, summarization for BART) and uploading them to AdapterHub.
We provide training scripts for many tasks here: https://github.com/Adapter-Hub/adapters/tree/main/examples/pytorch/