CLIP¶
Note
- Adapter implementation notes:
  CLIP consists of two separate Transformer encoders: a ViT-style Transformer for visual features and a language model for textual features. Both encoders can be fitted with adapters. As usual, the leave_out parameter can be used to specify the layers in which no adapter modules should be added. For CLIP, layer IDs are counted globally across both encoders, starting with the text encoder. E.g., for a CLIP model with 12 layers in each Transformer encoder, the text encoder has IDs 0-11 and the vision encoder has IDs 12-23.
  As CLIP does not come with pre-supported task-specific prediction heads, there is currently no CLIPAdapterModel class. Use CLIPModel instead.
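The following is a minimal sketch (not part of the original documentation) of how this can look with the adapters library's init() workflow. The adapter name and the openai/clip-vit-base-patch32 checkpoint are illustrative choices, and SeqBnConfig stands in for whatever adapter configuration you actually want to use:

```python
import adapters
from adapters import SeqBnConfig
from transformers import CLIPModel

# There is no CLIPAdapterModel class, so attach adapter support to CLIPModel directly.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
adapters.init(model)

# Layer IDs are counted globally across both encoders (text: 0-11, vision: 12-23 for a
# 12+12-layer CLIP), so leaving out 0-11 adds adapters to the vision encoder only.
config = SeqBnConfig(leave_out=list(range(12)))
model.add_adapter("clip_vision_only", config=config)
model.train_adapter("clip_vision_only")
```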
The CLIP model was proposed in Learning Transferable Visual Models From Natural Language Supervision by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet for a given image, without directly optimizing for the task, similar to the zero-shot capabilities of GPT-2 and GPT-3.
The abstract from the paper is the following:
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at this https URL.
CLIPTextModel¶
- class transformers.CLIPTextModel(config: CLIPTextConfig)¶
The text model from CLIP without any head or projection on top. This model inherits from [PreTrainedModel]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
- Parameters
config ([CLIPTextConfig]) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [~PreTrainedModel.from_pretrained] method to load the model weights.
- config_class¶
alias of CLIPTextConfig
- forward(input_ids: Optional[Tensor] = None, attention_mask: Optional[Tensor] = None, position_ids: Optional[Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None) → Union[Tuple, BaseModelOutputWithPooling]¶
The [CLIPTextModel] forward method overrides the __call__ special method.
<Tip>
Although the recipe for the forward pass needs to be defined within this function, one should call the [Module] instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
</Tip>
- Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using [AutoTokenizer]. See [PreTrainedTokenizer.encode] and [PreTrainedTokenizer.__call__] for details.
[What are input IDs?](../glossary#input-ids)
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) –
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
[What are attention masks?](../glossary#attention-mask)
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
[What are position IDs?](../glossary#position-ids)
output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) – Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.
Returns –
[transformers.modeling_outputs.BaseModelOutputWithPooling] or tuple(torch.FloatTensor): A [transformers.modeling_outputs.BaseModelOutputWithPooling] or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration ([CLIPTextConfig]) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) – Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Examples –

```python
>>> from transformers import AutoTokenizer, CLIPTextModel

>>> model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")
>>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")

>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output  # pooled (EOS token) states
```
- get_input_embeddings() → Module¶
Returns the model’s input embeddings.
- Returns
A torch module mapping vocabulary to hidden states.
- Return type
nn.Module
- set_input_embeddings(value)¶
Set model’s input embeddings.
- Parameters
value (nn.Module) – A module mapping vocabulary to hidden states.
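As a quick illustration (not part of the original reference), the two accessors above can be used to inspect or swap the text model's token embedding. A minimal sketch, assuming the same openai/clip-vit-base-patch32 checkpoint as in the examples:

```python
>>> from transformers import CLIPTextModel

>>> model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

>>> token_embedding = model.get_input_embeddings()  # an nn.Embedding module
>>> print(token_embedding.weight.shape)  # (vocab_size, hidden_size)

>>> # set_input_embeddings expects a compatible module, e.g. for weight tying or swapping
>>> model.set_input_embeddings(token_embedding)
```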
CLIPVisionModel¶
- class transformers.CLIPVisionModel(config: CLIPVisionConfig)¶
The vision model from CLIP without any head or projection on top. This model inherits from [PreTrainedModel]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
- Parameters
config ([CLIPVisionConfig]) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [~PreTrainedModel.from_pretrained] method to load the model weights.
- config_class¶
alias of CLIPVisionConfig
- forward(pixel_values: Optional[FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None) → Union[Tuple, BaseModelOutputWithPooling]¶
The [CLIPVisionModel] forward method overrides the __call__ special method.
<Tip>
Although the recipe for the forward pass needs to be defined within this function, one should call the [Module] instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
</Tip>
- Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) – Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using [AutoImageProcessor]. See [CLIPImageProcessor.__call__] for details.
output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) – Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.
Returns –
[transformers.modeling_outputs.BaseModelOutputWithPooling] or tuple(torch.FloatTensor): A [transformers.modeling_outputs.BaseModelOutputWithPooling] or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration ([CLIPVisionConfig]) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) – Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Examples –

```python
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, CLIPVisionModel

>>> model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(images=image, return_tensors="pt")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output  # pooled CLS states
```
- get_input_embeddings() → Module¶
Returns the model’s input embeddings.
- Returns
A torch module mapping pixel values to hidden states (the patch embedding module).
- Return type
nn.Module
CLIPModel¶
- class transformers.CLIPModel(config: CLIPConfig)¶
This model inherits from [PreTrainedModel]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
- Parameters
config ([CLIPConfig]) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [~PreTrainedModel.from_pretrained] method to load the model weights.
- config_class¶
alias of CLIPConfig
- forward(input_ids: Optional[LongTensor] = None, pixel_values: Optional[FloatTensor] = None, attention_mask: Optional[Tensor] = None, position_ids: Optional[LongTensor] = None, return_loss: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None) → Union[Tuple, CLIPOutput]¶
The [CLIPModel] forward method overrides the __call__ special method.
<Tip>
Although the recipe for the forward pass needs to be defined within this function, one should call the [Module] instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
</Tip>
- Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using [AutoTokenizer]. See [PreTrainedTokenizer.encode] and [PreTrainedTokenizer.__call__] for details.
[What are input IDs?](../glossary#input-ids)
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) –
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
[What are attention masks?](../glossary#attention-mask)
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
[What are position IDs?](../glossary#position-ids)
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) – Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using [AutoImageProcessor]. See [CLIPImageProcessor.__call__] for details.
return_loss (bool, optional) – Whether or not to return the contrastive loss.
output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) – Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.
Returns –
[transformers.models.clip.modeling_clip.CLIPOutput] or tuple(torch.FloatTensor): A [transformers.models.clip.modeling_clip.CLIPOutput] or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration ([CLIPConfig]) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) – Contrastive loss for image-text similarity.
logits_per_image (torch.FloatTensor of shape (image_batch_size, text_batch_size)) – The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores.
logits_per_text (torch.FloatTensor of shape (text_batch_size, image_batch_size)) – The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores.
text_embeds (torch.FloatTensor of shape (batch_size, output_dim)) – The text embeddings obtained by applying the projection layer to the pooled output of [CLIPTextModel].
image_embeds (torch.FloatTensor of shape (batch_size, output_dim)) – The image embeddings obtained by applying the projection layer to the pooled output of [CLIPVisionModel].
text_model_output (BaseModelOutputWithPooling) – The output of the [CLIPTextModel].
vision_model_output (BaseModelOutputWithPooling) – The output of the [CLIPVisionModel].
Examples –

```python
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, CLIPModel

>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(
...     text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
... )

>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities
```
- get_image_features(pixel_values: Optional[FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None) → FloatTensor¶
The [CLIPModel] get_image_features method. Encodes images into the projected embedding space used for image-text similarity.
- Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) – Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using [AutoImageProcessor]. See [CLIPImageProcessor.__call__] for details.
output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) – Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.
Returns – image_features (torch.FloatTensor of shape (batch_size, output_dim)) – The image embeddings obtained by applying the projection layer to the pooled output of [CLIPVisionModel].
Examples –

```python
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, CLIPModel

>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(images=image, return_tensors="pt")

>>> image_features = model.get_image_features(**inputs)
```
- get_text_features(input_ids: Optional[Tensor] = None, attention_mask: Optional[Tensor] = None, position_ids: Optional[Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None) → FloatTensor¶
The [CLIPModel] get_text_features method. Encodes text into the projected embedding space used for image-text similarity.
- Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using [AutoTokenizer]. See [PreTrainedTokenizer.encode] and [PreTrainedTokenizer.__call__] for details.
[What are input IDs?](../glossary#input-ids)
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) –
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
[What are attention masks?](../glossary#attention-mask)
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) –
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
[What are position IDs?](../glossary#position-ids)
output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) – Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.
Returns – text_features (torch.FloatTensor of shape (batch_size, output_dim)) – The text embeddings obtained by applying the projection layer to the pooled output of [CLIPTextModel].
Examples –

```python
>>> from transformers import AutoTokenizer, CLIPModel

>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")

>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
>>> text_features = model.get_text_features(**inputs)
```
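As a complement to the two helpers above, the following sketch (not part of the original reference) shows how L2-normalized text and image features relate to the similarity scores reported by [CLIPModel]'s forward pass; the checkpoint and prompts mirror the examples above, and the logit_scale factor applied inside forward is only noted in a comment:

```python
>>> import torch
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, AutoTokenizer, CLIPModel

>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
>>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> with torch.no_grad():
...     image_features = model.get_image_features(**processor(images=image, return_tensors="pt"))
...     text_features = model.get_text_features(
...         **tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
...     )

>>> # Normalize and compare by cosine similarity; CLIPModel.forward additionally scales
>>> # these scores by the learned logit_scale before they appear as logits_per_image.
>>> image_features = image_features / image_features.norm(dim=-1, keepdim=True)
>>> text_features = text_features / text_features.norm(dim=-1, keepdim=True)
>>> similarity = image_features @ text_features.T  # shape (image_batch_size, text_batch_size)
```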