BiEncoderTokenizer

class lightning_ir.bi_encoder.tokenizer.BiEncoderTokenizer(*args, query_expansion: bool = False, query_length: int = 32, attend_to_query_expanded_tokens: bool = False, doc_expansion: bool = False, doc_length: int = 512, attend_to_doc_expanded_tokens: bool = False, add_marker_tokens: bool = True, **kwargs)[source]

Bases: LightningIRTokenizer

__init__(*args, query_expansion: bool = False, query_length: int = 32, attend_to_query_expanded_tokens: bool = False, doc_expansion: bool = False, doc_length: int = 512, attend_to_doc_expanded_tokens: bool = False, add_marker_tokens: bool = True, **kwargs)[source]

LightningIRTokenizer for bi-encoder models. Encodes queries and documents separately. Optionally adds marker tokens to the encoded input sequences.

Parameters:
  • query_expansion (bool, optional) – Whether to expand queries with mask tokens, defaults to False

  • query_length (int, optional) – Maximum query length in number of tokens, defaults to 32

  • attend_to_query_expanded_tokens (bool, optional) – Whether non-expanded query tokens may attend to the mask tokens added by query expansion, defaults to False

  • doc_expansion (bool, optional) – Whether to expand documents with mask tokens, defaults to False

  • doc_length (int, optional) – Maximum document length in number of tokens, defaults to 512

  • attend_to_doc_expanded_tokens (bool, optional) – Whether non-expanded document tokens may attend to the mask tokens added by document expansion, defaults to False

  • add_marker_tokens (bool, optional) – Whether to add marker tokens to the query and document input sequences, defaults to True

Raises:

ValueError – If add_marker_tokens is True and an unsupported tokenizer is used
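
A minimal usage sketch (assuming the lightning-ir package is installed and the "bert-base-uncased" backbone checkpoint is available; forwarding of keyword arguments such as return_tensors from tokenize_query / tokenize_doc to the backbone tokenizer is assumed here):

```python
from lightning_ir.bi_encoder.tokenizer import BiEncoderTokenizer

# Build a bi-encoder tokenizer on top of a BERT backbone. With
# add_marker_tokens=True (the default), [QUE] and [DOC] marker tokens are
# added to query and document sequences, respectively.
tokenizer = BiEncoderTokenizer.from_pretrained(
    "bert-base-uncased",
    query_length=32,  # maximum query length in tokens
    doc_length=512,   # maximum document length in tokens
)

# Queries and documents are encoded separately.
query_encoding = tokenizer.tokenize_query(["what is information retrieval?"], return_tensors="pt")
doc_encoding = tokenizer.tokenize_doc(["Information retrieval is the activity of finding relevant documents."], return_tensors="pt")
```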

Methods

__init__(*args[, query_expansion, ...])

LightningIRTokenizer for bi-encoder models.

add_special_tokens(special_tokens_dict[, ...])

Add a dictionary of special tokens (eos, pad, cls, etc.) to the encoder and link them to class attributes.

add_tokens(new_tokens[, special_tokens])

Add a list of new tokens to the tokenizer class.

apply_chat_template(conversation[, tools, ...])

Converts a list of dictionaries with "role" and "content" keys to a list of token ids.

as_target_tokenizer()

Temporarily sets the tokenizer for encoding the targets.

batch_decode(sequences[, ...])

Convert a list of lists of token ids into a list of strings by calling decode.

batch_encode_plus(batch_text_or_text_pairs)

Tokenize and prepare for the model a list of sequences or a list of pairs of sequences.

build_inputs_with_special_tokens(token_ids_0)

Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens.

clean_up_tokenization(out_string)

Clean up a list of simple English tokenization artifacts like spaces before punctuations and abbreviated forms.

convert_added_tokens(obj[, save, add_type_field])

convert_tokens_to_string(tokens)

Converts a sequence of tokens into a single string.

create_token_type_ids_from_sequences(token_ids_0)

Create the token type IDs corresponding to the sequences passed.

decode(token_ids[, skip_special_tokens, ...])

Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.

encode(text[, text_pair, ...])

Converts a string to a sequence of ids (integers), using the tokenizer and vocabulary.

encode_plus(text[, text_pair, ...])

Tokenize and prepare for the model a sequence or a pair of sequences.

from_pretrained(model_name_or_path, *args, ...)

Loads a pretrained tokenizer.

get_chat_template([chat_template, tools])

Retrieve the chat template string used for tokenizing chat messages.

get_special_tokens_mask(token_ids_0[, ...])

Retrieves sequence ids from a token list that has no special tokens added.

get_vocab()

Returns the vocabulary as a dictionary of token to index.

num_special_tokens_to_add([pair])

pad(encoded_inputs[, padding, max_length, ...])

Pad a single encoded input or a batch of encoded inputs up to predefined length or to the max sequence length in the batch.

prepare_for_model(ids[, pair_ids, ...])

Prepares a sequence of input ids, or a pair of sequences of input ids, so that it can be used by the model.

prepare_seq2seq_batch(src_texts[, ...])

Prepare model inputs for translation.

push_to_hub(repo_id[, use_temp_dir, ...])

Upload the tokenizer files to the 🤗 Model Hub.

register_for_auto_class([auto_class])

Register this class with a given auto class.

sanitize_special_tokens()

The sanitize_special_tokens method is now deprecated; it is kept for backward compatibility and will be removed in transformers v5.

save_pretrained(save_directory[, ...])

Save the full tokenizer state.

save_vocabulary(save_directory[, ...])

Save only the vocabulary of the tokenizer (vocabulary + added tokens).

tokenize([queries, docs])

Tokenizes queries and documents.

tokenize_doc(docs, *args, **kwargs)

Tokenizes input documents.

tokenize_query(queries, *args, **kwargs)

Tokenizes input queries.

truncate_sequences(ids[, pair_ids, ...])

Truncates a sequence pair in-place following the strategy.

Attributes

DOC_TOKEN

Token to mark a document sequence.

QUERY_TOKEN

Token to mark a query sequence.

SPECIAL_TOKENS_ATTRIBUTES

added_tokens_decoder

all_special_ids

List the ids of the special tokens ('<unk>', '<cls>', etc.) mapped to class attributes.

all_special_tokens

A list of the unique special tokens ('<unk>', '<cls>', ..., etc.).

all_special_tokens_extended

All the special tokens ('<unk>', '<cls>', etc.); the order has nothing to do with the index of each token.

doc_token_id

The token id of the document token if marker tokens are added.

max_len_sentences_pair

The maximum combined length of a pair of sentences that can be fed to the model.

max_len_single_sentence

The maximum length of a sentence that can be fed to the model.

model_input_names

pad_token_type_id

Id of the padding token type in the vocabulary.

padding_side

pretrained_vocab_files_map

query_token_id

The token id of the query token if marker tokens are added.

slow_tokenizer_class

special_tokens_map

A dictionary mapping special token class attributes (cls_token, unk_token, etc.) to their values ('<unk>', '<cls>', etc.).

special_tokens_map_extended

A dictionary mapping special token class attributes (cls_token, unk_token, etc.) to their values ('<unk>', '<cls>', etc.).

truncation_side

vocab_files_names

DOC_TOKEN: str = '[DOC]'

Token to mark a document sequence.

QUERY_TOKEN: str = '[QUE]'

Token to mark a query sequence.
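
A quick way to inspect the marker tokens and their vocabulary ids (a sketch assuming a BiEncoderTokenizer instance `tokenizer` constructed with add_marker_tokens=True, as in the usage sketch near the top of this page):

```python
# The marker token strings are class attributes; their ids are only available
# when add_marker_tokens=True (otherwise the id properties return None).
print(tokenizer.QUERY_TOKEN, tokenizer.query_token_id)  # e.g. "[QUE]" and its vocabulary id
print(tokenizer.DOC_TOKEN, tokenizer.doc_token_id)      # e.g. "[DOC]" and its vocabulary id
```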

add_special_tokens(special_tokens_dict: Dict[str, str | AddedToken], replace_additional_special_tokens=True) int

Add a dictionary of special tokens (eos, pad, cls, etc.) to the encoder and link them to class attributes. If special tokens are NOT in the vocabulary, they are added to it (indexed starting from the last index of the current vocabulary).

When adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that its embedding matrix matches the tokenizer.

In order to do that, please use the [~PreTrainedModel.resize_token_embeddings] method.

Using add_special_tokens will ensure your special tokens can be used in several ways:

  • Special tokens can be skipped when decoding using skip_special_tokens = True.

  • Special tokens are carefully handled by the tokenizer (they are never split), similar to AddedTokens.

  • You can easily refer to special tokens using tokenizer class attributes like tokenizer.cls_token. This makes it easy to develop model-agnostic training and fine-tuning scripts.

When possible, special tokens are already registered for provided pretrained models (for instance [BertTokenizer]’s cls_token is already registered to be ‘[CLS]’ and XLM’s is registered to be ‘</s>’).

Parameters:
  • special_tokens_dict (dictionary str to str or tokenizers.AddedToken) –

    Keys should be in the list of predefined special attributes: [bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens].

    Tokens are only added if they are not already in the vocabulary (tested by checking if the tokenizer assigns the index of the unk_token to them).

  • replace_additional_special_tokens (bool, optional, defaults to True) – If True, the existing list of additional special tokens will be replaced by the list provided in special_tokens_dict. Otherwise, self._special_tokens_map[“additional_special_tokens”] is just extended. In the former case, the tokens will NOT be removed from the tokenizer’s full vocabulary - they are only being flagged as non-special tokens. Remember, this only affects which tokens are skipped during decoding, not the added_tokens_encoder and added_tokens_decoder. This means that the previous additional_special_tokens are still added tokens, and will not be split by the model.

Returns:

Number of tokens added to the vocabulary.

Return type:

int

Examples:

```python
# Let's see how to add a new classification token to GPT-2
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
model = GPT2Model.from_pretrained("openai-community/gpt2")

special_tokens_dict = {"cls_token": "<CLS>"}

num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
print("We have added", num_added_toks, "tokens")
# Notice: resize_token_embeddings expects to receive the full size of the new
# vocabulary, i.e., the length of the tokenizer.
model.resize_token_embeddings(len(tokenizer))

assert tokenizer.cls_token == "<CLS>"
```

add_tokens(new_tokens: str | AddedToken | List[str | AddedToken], special_tokens: bool = False) int

Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to it with indices starting from length of the current vocabulary and will be isolated before the tokenization algorithm is applied. Added tokens and tokens from the vocabulary of the tokenization algorithm are therefore not treated in the same way.

Note, when adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that its embedding matrix matches the tokenizer.

In order to do that, please use the [~PreTrainedModel.resize_token_embeddings] method.

Parameters:
  • new_tokens (str, tokenizers.AddedToken or a list of str or tokenizers.AddedToken) – Tokens are only added if they are not already in the vocabulary. tokenizers.AddedToken wraps a string token to let you personalize its behavior: whether this token should only match against a single word, whether this token should strip all potential whitespaces on the left side, whether this token should strip all potential whitespaces on the right side, etc.

  • special_tokens (bool, optional, defaults to False) –

    Can be used to specify if the token is a special token. This mostly changes the normalization behavior (special tokens like CLS or [MASK] are usually not lower-cased for instance).

    See details for tokenizers.AddedToken in HuggingFace tokenizers library.

Returns:

Number of tokens added to the vocabulary.

Return type:

int

Examples:

```python
# Let's see how to increase the vocabulary of Bert model and tokenizer
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("google-bert/bert-base-uncased")
model = BertModel.from_pretrained("google-bert/bert-base-uncased")

num_added_toks = tokenizer.add_tokens(["new_tok1", "my_new-tok2"])
print("We have added", num_added_toks, "tokens")
# Notice: resize_token_embeddings expects to receive the full size of the new
# vocabulary, i.e., the length of the tokenizer.
model.resize_token_embeddings(len(tokenizer))
```

property all_special_ids: List[int]

List the ids of the special tokens (‘<unk>’, ‘<cls>’, etc.) mapped to class attributes.

Type:

List[int]

property all_special_tokens: List[str]

A list of the unique special tokens (‘<unk>’, ‘<cls>’, …, etc.).

Convert tokens of tokenizers.AddedToken type to string.

Type:

List[str]

property all_special_tokens_extended: List[str | AddedToken]

All the special tokens (‘<unk>’, ‘<cls>’, etc.); the order has nothing to do with the index of each token. If you want to know the correct indices, check self.added_tokens_encoder. We can’t create an order anymore as the keys are AddedTokens and not Strings.

Don’t convert tokens of tokenizers.AddedToken type to string so they can be used to control more finely how special tokens are tokenized.

Type:

List[Union[str, tokenizers.AddedToken]]

apply_chat_template(conversation: List[Dict[str, str]] | List[List[Dict[str, str]]], tools: List[Dict | Callable] | None = None, documents: List[Dict[str, str]] | None = None, chat_template: str | None = None, add_generation_prompt: bool = False, continue_final_message: bool = False, tokenize: bool = True, padding: bool | str | PaddingStrategy = False, truncation: bool = False, max_length: int | None = None, return_tensors: str | TensorType | None = None, return_dict: bool = False, return_assistant_tokens_mask: bool = False, tokenizer_kwargs: Dict[str, Any] | None = None, **kwargs) str | List[int] | List[str] | List[List[int]] | BatchEncoding

Converts a list of dictionaries with “role” and “content” keys to a list of token ids. This method is intended for use with chat models, and will read the tokenizer’s chat_template attribute to determine the format and control tokens to use when converting.

Parameters:
  • conversation (Union[List[Dict[str, str]], List[List[Dict[str, str]]]]) – A list of dicts with “role” and “content” keys, representing the chat history so far.

  • tools (List[Dict], optional) – A list of tools (callable functions) that will be accessible to the model. If the template does not support function calling, this argument will have no effect. Each tool should be passed as a JSON Schema, giving the name, description and argument types for the tool. See our [chat templating guide](https://huggingface.co/docs/transformers/main/en/chat_templating#automated-function-conversion-for-tool-use) for more information.

  • documents (List[Dict[str, str]], optional) – A list of dicts representing documents that will be accessible to the model if it is performing RAG (retrieval-augmented generation). If the template does not support RAG, this argument will have no effect. We recommend that each document should be a dict containing “title” and “text” keys. Please see the RAG section of the [chat templating guide](https://huggingface.co/docs/transformers/main/en/chat_templating#arguments-for-RAG) for examples of passing documents with chat templates.

  • chat_template (str, optional) – A Jinja template to use for this conversion. It is usually not necessary to pass anything to this argument, as the model’s template will be used by default.

  • add_generation_prompt (bool, optional) – If this is set, a prompt with the token(s) that indicate the start of an assistant message will be appended to the formatted output. This is useful when you want to generate a response from the model. Note that this argument will be passed to the chat template, and so it must be supported in the template for this argument to have any effect.

  • continue_final_message (bool, optional) – If this is set, the chat will be formatted so that the final message in the chat is open-ended, without any EOS tokens. The model will continue this message rather than starting a new one. This allows you to “prefill” part of the model’s response for it. Cannot be used at the same time as add_generation_prompt.

  • tokenize (bool, defaults to True) – Whether to tokenize the output. If False, the output will be a string.

  • padding (bool, str or [~utils.PaddingStrategy], optional, defaults to False) –

    Select a strategy to pad the returned sequences (according to the model’s padding side and padding index) among:

    • True or ‘longest’: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).

    • ’max_length’: Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.

    • False or ‘do_not_pad’ (default): No padding (i.e., can output a batch with sequences of different lengths).

  • truncation (bool, defaults to False) – Whether to truncate sequences at the maximum length. Has no effect if tokenize is False.

  • max_length (int, optional) – Maximum length (in tokens) to use for padding or truncation. Has no effect if tokenize is False. If not specified, the tokenizer’s max_length attribute will be used as a default.

  • return_tensors (str or [~utils.TensorType], optional) – If set, will return tensors of a particular framework. Has no effect if tokenize is False. Acceptable values are: - ‘tf’: Return TensorFlow tf.Tensor objects. - ‘pt’: Return PyTorch torch.Tensor objects. - ‘np’: Return NumPy np.ndarray objects. - ‘jax’: Return JAX jnp.ndarray objects.

  • return_dict (bool, defaults to False) – Whether to return a dictionary with named outputs. Has no effect if tokenize is False.

  • tokenizer_kwargs (Dict[str, Any], optional) – Additional kwargs to pass to the tokenizer.

  • return_assistant_tokens_mask (bool, defaults to False) – Whether to return a mask of the assistant generated tokens. For tokens generated by the assistant, the mask will contain 1. For user and system tokens, the mask will contain 0. This functionality is only available for chat templates that support it via the {% generation %} keyword.

  • **kwargs – Additional kwargs to pass to the template renderer. Will be accessible by the chat template.

Returns:

A list of token ids representing the tokenized chat so far, including control tokens. This output is ready to pass to the model, either directly or via methods like generate(). If return_dict is set, will return a dict of tokenizer outputs instead.

Return type:

Union[List[int], Dict]
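
A hedged sketch of the call pattern (apply_chat_template requires the backbone tokenizer to define a chat_template; encoder backbones typically used with BiEncoderTokenizer, such as BERT, usually do not ship one, in which case this call raises an error):

```python
# Assumes `tokenizer` has a chat_template configured; purely illustrative.
messages = [{"role": "user", "content": "What is dense retrieval?"}]
token_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
```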

as_target_tokenizer()

Temporarily sets the tokenizer for encoding the targets. Useful for tokenizer associated to sequence-to-sequence models that need a slightly different processing for the labels.

batch_decode(sequences: List[int] | List[List[int]] | np.ndarray | torch.Tensor | tf.Tensor, skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = None, **kwargs) List[str]

Convert a list of lists of token ids into a list of strings by calling decode.

Parameters:
  • sequences (Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]) – List of tokenized input ids. Can be obtained using the __call__ method.

  • skip_special_tokens (bool, optional, defaults to False) – Whether or not to remove special tokens in the decoding.

  • clean_up_tokenization_spaces (bool, optional) – Whether or not to clean up the tokenization spaces. If None, will default to self.clean_up_tokenization_spaces.

  • kwargs (additional keyword arguments, optional) – Will be passed to the underlying model specific decode method.

Returns:

The list of decoded sentences.

Return type:

List[str]
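
For example (a sketch assuming the BiEncoderTokenizer instance from the usage sketch near the top of this page; without return_tensors, tokenize_doc is assumed to return Python lists of ids):

```python
# Encode two documents and decode them back to text; special tokens
# (markers, CLS, SEP, padding) are dropped via skip_special_tokens=True.
doc_encoding = tokenizer.tokenize_doc(["first document", "second document"])
texts = tokenizer.batch_decode(doc_encoding["input_ids"], skip_special_tokens=True)
```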

batch_encode_plus(batch_text_or_text_pairs: List[str] | List[Tuple[str, str]] | List[List[str]] | List[Tuple[List[str], List[str]]] | List[List[int]] | List[Tuple[List[int], List[int]]], add_special_tokens: bool = True, padding: bool | str | PaddingStrategy = False, truncation: bool | str | TruncationStrategy = None, max_length: int | None = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: int | None = None, padding_side: bool | None = None, return_tensors: str | TensorType | None = None, return_token_type_ids: bool | None = None, return_attention_mask: bool | None = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, split_special_tokens: bool = False, **kwargs) BatchEncoding

Tokenize and prepare for the model a list of sequences or a list of pairs of sequences.

This method is deprecated; __call__ should be used instead.

Parameters:
  • batch_text_or_text_pairs (List[str], List[Tuple[str, str]], List[List[str]], List[Tuple[List[str], List[str]]], and for not-fast tokenizers, also List[List[int]], List[Tuple[List[int], List[int]]]) – Batch of sequences or pair of sequences to be encoded. This can be a list of string/string-sequences/int-sequences or a list of pair of string/string-sequences/int-sequence (see details in encode_plus).

  • add_special_tokens (bool, optional, defaults to True) – Whether or not to add special tokens when encoding the sequences. This will use the underlying PretrainedTokenizerBase.build_inputs_with_special_tokens function, which defines which tokens are automatically added to the input ids. This is useful if you want to add bos or eos tokens automatically.

  • padding (bool, str or [~utils.PaddingStrategy], optional, defaults to False) –

    Activates and controls padding. Accepts the following values:

    • True or ‘longest’: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).

    • ’max_length’: Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.

    • False or ‘do_not_pad’ (default): No padding (i.e., can output a batch with sequences of different lengths).

  • truncation (bool, str or [~tokenization_utils_base.TruncationStrategy], optional, defaults to False) –

    Activates and controls truncation. Accepts the following values:

    • True or ‘longest_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.

    • ’only_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • ’only_second’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • False or ‘do_not_truncate’ (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).

  • max_length (int, optional) –

    Controls the maximum length to use by one of the truncation/padding parameters.

    If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.

  • stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.

  • is_split_into_words (bool, optional, defaults to False) – Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.

  • pad_to_multiple_of (int, optional) – If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).

  • padding_side (str, optional) – The side on which the model should have padding applied. Should be selected between [‘right’, ‘left’]. Default value is picked from the class attribute of the same name.

  • return_tensors (str or [~utils.TensorType], optional) –

    If set, will return tensors instead of list of python integers. Acceptable values are:

    • ’tf’: Return TensorFlow tf.constant objects.

    • ’pt’: Return PyTorch torch.Tensor objects.

    • ’np’: Return Numpy np.ndarray objects.

  • return_token_type_ids (bool, optional) –

    Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.

    [What are token type IDs?](../glossary#token-type-ids)

  • return_attention_mask (bool, optional) –

    Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.

    [What are attention masks?](../glossary#attention-mask)

  • return_overflowing_tokens (bool, optional, defaults to False) – Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead of returning overflowing tokens.

  • return_special_tokens_mask (bool, optional, defaults to False) – Whether or not to return special tokens mask information.

  • return_offsets_mapping (bool, optional, defaults to False) –

    Whether or not to return (char_start, char_end) for each token.

    This is only available on fast tokenizers inheriting from [PreTrainedTokenizerFast], if using Python’s tokenizer, this method will raise NotImplementedError.

  • return_length (bool, optional, defaults to False) – Whether or not to return the lengths of the encoded inputs.

  • verbose (bool, optional, defaults to True) – Whether or not to print more information and warnings.

  • **kwargs – passed to the self.tokenize() method

Returns:

A [BatchEncoding] with the following fields:

  • input_ids – List of token ids to be fed to a model.

    [What are input IDs?](../glossary#input-ids)

  • token_type_ids – List of token type ids to be fed to a model (when return_token_type_ids=True or if “token_type_ids” is in self.model_input_names).

    [What are token type IDs?](../glossary#token-type-ids)

  • attention_mask – List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names).

    [What are attention masks?](../glossary#attention-mask)

  • overflowing_tokens – List of overflowing tokens sequences (when a max_length is specified and return_overflowing_tokens=True).

  • num_truncated_tokens – Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).

  • special_tokens_mask – List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).

  • length – The length of the inputs (when return_length=True)

Return type:

[BatchEncoding]

build_inputs_with_special_tokens(token_ids_0: List[int], token_ids_1: List[int] | None = None) List[int]

Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens.

This implementation does not add special tokens and this method should be overridden in a subclass.

Parameters:
  • token_ids_0 (List[int]) – The first tokenized sequence.

  • token_ids_1 (List[int], optional) – The second tokenized sequence.

Returns:

The model input with special tokens.

Return type:

List[int]

static clean_up_tokenization(out_string: str) str

Clean up a list of simple English tokenization artifacts like spaces before punctuations and abbreviated forms.

Parameters:

out_string (str) – The text to clean up.

Returns:

The cleaned-up string.

Return type:

str

config_class

Configuration class for the tokenizer.

alias of BiEncoderConfig

convert_tokens_to_string(tokens: List[str]) str

Converts a sequence of tokens into a single string. The simplest way to do it is “ “.join(tokens) but we often want to remove sub-word tokenization artifacts at the same time.

Parameters:

tokens (List[str]) – The tokens to join into a string.

Returns:

The joined tokens.

Return type:

str

create_token_type_ids_from_sequences(token_ids_0: List[int], token_ids_1: List[int] | None = None) List[int]

Create the token type IDs corresponding to the sequences passed. [What are token type IDs?](../glossary#token-type-ids)

Should be overridden in a subclass if the model has a special way of building those.

Parameters:
  • token_ids_0 (List[int]) – The first tokenized sequence.

  • token_ids_1 (List[int], optional) – The second tokenized sequence.

Returns:

The token type ids.

Return type:

List[int]

decode(token_ids: int | List[int] | np.ndarray | torch.Tensor | tf.Tensor, skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = None, **kwargs) str

Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.

Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).

Parameters:
  • token_ids (Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]) – List of tokenized input ids. Can be obtained using the __call__ method.

  • skip_special_tokens (bool, optional, defaults to False) – Whether or not to remove special tokens in the decoding.

  • clean_up_tokenization_spaces (bool, optional) – Whether or not to clean up the tokenization spaces. If None, will default to self.clean_up_tokenization_spaces.

  • kwargs (additional keyword arguments, optional) – Will be passed to the underlying model specific decode method.

Returns:

The decoded sentence.

Return type:

str
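
For example (a sketch assuming the BiEncoderTokenizer instance from the usage sketch near the top of this page and list output, i.e. no return_tensors):

```python
# Keeping special tokens shows the [QUE] marker (and, if query_expansion is
# enabled, the appended [MASK] expansion tokens).
query_encoding = tokenizer.tokenize_query(["what is information retrieval?"])
print(tokenizer.decode(query_encoding["input_ids"][0], skip_special_tokens=False))
```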

property doc_token_id: int | None

The token id of the document token if marker tokens are added.

Returns:

Token id of the document token

Return type:

int | None

encode(text: str | List[str] | List[int], text_pair: str | List[str] | List[int] | None = None, add_special_tokens: bool = True, padding: bool | str | PaddingStrategy = False, truncation: bool | str | TruncationStrategy = None, max_length: int | None = None, stride: int = 0, padding_side: bool | None = None, return_tensors: str | TensorType | None = None, **kwargs) List[int]

Converts a string to a sequence of ids (integers), using the tokenizer and vocabulary.

Same as doing self.convert_tokens_to_ids(self.tokenize(text)).

Parameters:
  • text (str, List[str] or List[int]) – The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).

  • text_pair (str, List[str] or List[int], optional) – Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).

  • add_special_tokens (bool, optional, defaults to True) – Whether or not to add special tokens when encoding the sequences. This will use the underlying PretrainedTokenizerBase.build_inputs_with_special_tokens function, which defines which tokens are automatically added to the input ids. This is useful if you want to add bos or eos tokens automatically.

  • padding (bool, str or [~utils.PaddingStrategy], optional, defaults to False) –

    Activates and controls padding. Accepts the following values:

    • True or ‘longest’: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).

    • ’max_length’: Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.

    • False or ‘do_not_pad’ (default): No padding (i.e., can output a batch with sequences of different lengths).

  • truncation (bool, str or [~tokenization_utils_base.TruncationStrategy], optional, defaults to False) –

    Activates and controls truncation. Accepts the following values:

    • True or ‘longest_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.

    • ’only_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • ’only_second’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • False or ‘do_not_truncate’ (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).

  • max_length (int, optional) –

    Controls the maximum length to use by one of the truncation/padding parameters.

    If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.

  • stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.

  • is_split_into_words (bool, optional, defaults to False) – Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.

  • pad_to_multiple_of (int, optional) – If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).

  • padding_side (str, optional) – The side on which the model should have padding applied. Should be selected between [‘right’, ‘left’]. Default value is picked from the class attribute of the same name.

  • return_tensors (str or [~utils.TensorType], optional) –

    If set, will return tensors instead of list of python integers. Acceptable values are:

    • ’tf’: Return TensorFlow tf.constant objects.

    • ’pt’: Return PyTorch torch.Tensor objects.

    • ’np’: Return Numpy np.ndarray objects.

  • **kwargs – Passed along to the .tokenize() method.

Returns:

The tokenized ids of the text.

Return type:

List[int], torch.Tensor, tf.Tensor or np.ndarray
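
A hedged usage note: encode is the generic inherited encoding path and is not guaranteed to add the [QUE]/[DOC] marker tokens, so for retrieval inputs the tokenize_query / tokenize_doc methods are the intended entry points.

```python
# Generic encoding of raw text to ids (backbone special tokens only).
ids = tokenizer.encode("what is information retrieval?", add_special_tokens=True)
```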

encode_plus(text: str | List[str] | List[int], text_pair: str | List[str] | List[int] | None = None, add_special_tokens: bool = True, padding: bool | str | PaddingStrategy = False, truncation: bool | str | TruncationStrategy = None, max_length: int | None = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: int | None = None, padding_side: bool | None = None, return_tensors: str | TensorType | None = None, return_token_type_ids: bool | None = None, return_attention_mask: bool | None = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs) BatchEncoding

Tokenize and prepare for the model a sequence or a pair of sequences.

This method is deprecated; __call__ should be used instead.

Parameters:
  • text (str, List[str] or (for non-fast tokenizers) List[int]) – The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).

  • text_pair (str, List[str] or List[int], optional) – Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).

  • add_special_tokens (bool, optional, defaults to True) – Whether or not to add special tokens when encoding the sequences. This will use the underlying PretrainedTokenizerBase.build_inputs_with_special_tokens function, which defines which tokens are automatically added to the input ids. This is useful if you want to add bos or eos tokens automatically.

  • padding (bool, str or [~utils.PaddingStrategy], optional, defaults to False) –

    Activates and controls padding. Accepts the following values:

    • True or ‘longest’: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).

    • ’max_length’: Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.

    • False or ‘do_not_pad’ (default): No padding (i.e., can output a batch with sequences of different lengths).

  • truncation (bool, str or [~tokenization_utils_base.TruncationStrategy], optional, defaults to False) –

    Activates and controls truncation. Accepts the following values:

    • True or ‘longest_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.

    • ’only_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • ’only_second’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • False or ‘do_not_truncate’ (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).

  • max_length (int, optional) –

    Controls the maximum length to use by one of the truncation/padding parameters.

    If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.

  • stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.

  • is_split_into_words (bool, optional, defaults to False) – Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.

  • pad_to_multiple_of (int, optional) – If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).

  • padding_side (str, optional) – The side on which the model should have padding applied. Should be selected between [‘right’, ‘left’]. Default value is picked from the class attribute of the same name.

  • return_tensors (str or [~utils.TensorType], optional) –

    If set, will return tensors instead of list of python integers. Acceptable values are:

    • ’tf’: Return TensorFlow tf.constant objects.

    • ’pt’: Return PyTorch torch.Tensor objects.

    • ’np’: Return Numpy np.ndarray objects.

  • return_token_type_ids (bool, optional) –

    Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.

    [What are token type IDs?](../glossary#token-type-ids)

  • return_attention_mask (bool, optional) –

    Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.

    [What are attention masks?](../glossary#attention-mask)

  • return_overflowing_tokens (bool, optional, defaults to False) – Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead of returning overflowing tokens.

  • return_special_tokens_mask (bool, optional, defaults to False) – Whether or not to return special tokens mask information.

  • return_offsets_mapping (bool, optional, defaults to False) –

    Whether or not to return (char_start, char_end) for each token.

    This is only available on fast tokenizers inheriting from [PreTrainedTokenizerFast], if using Python’s tokenizer, this method will raise NotImplementedError.

  • return_length (bool, optional, defaults to False) – Whether or not to return the lengths of the encoded inputs.

  • verbose (bool, optional, defaults to True) – Whether or not to print more information and warnings.

  • **kwargs – passed to the self.tokenize() method

Returns:

A [BatchEncoding] with the following fields:

  • input_ids – List of token ids to be fed to a model.

    [What are input IDs?](../glossary#input-ids)

  • token_type_ids – List of token type ids to be fed to a model (when return_token_type_ids=True or if “token_type_ids” is in self.model_input_names).

    [What are token type IDs?](../glossary#token-type-ids)

  • attention_mask – List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names).

    [What are attention masks?](../glossary#attention-mask)

  • overflowing_tokens – List of overflowing tokens sequences (when a max_length is specified and return_overflowing_tokens=True).

  • num_truncated_tokens – Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).

  • special_tokens_mask – List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).

  • length – The length of the inputs (when return_length=True)

Return type:

[BatchEncoding]

classmethod from_pretrained(model_name_or_path: str, *args, **kwargs) LightningIRTokenizer

Loads a pretrained tokenizer. Wraps the transformers.PreTrainedTokenizer.from_pretrained method to return a derived LightningIRTokenizer class. See LightningIRTokenizerClassFactory for more details.

>>> # Loading using model class and backbone checkpoint
>>> type(BiEncoderTokenizer.from_pretrained("bert-base-uncased"))
<class 'lightning_ir.base.class_factory.BiEncoderBertTokenizerFast'>
>>> # Loading using base class and backbone checkpoint
>>> type(LightningIRTokenizer.from_pretrained("bert-base-uncased", config=BiEncoderConfig()))
<class 'lightning_ir.base.class_factory.BiEncoderBertTokenizerFast'>
Parameters:

model_name_or_path (str) – Name or path of the pretrained tokenizer

Raises:

ValueError – If called on the abstract class LightningIRTokenizer and no config is passed

Returns:

A derived LightningIRTokenizer consisting of a backbone tokenizer and a LightningIRTokenizer mixin

Return type:

LightningIRTokenizer

get_chat_template(chat_template: str | None = None, tools: List[Dict] | None = None) str

Retrieve the chat template string used for tokenizing chat messages. This template is used internally by the apply_chat_template method and can also be used externally to retrieve the model’s chat template for better generation tracking.

Parameters:
  • chat_template (str, optional) – A Jinja template or the name of a template to use for this conversion. It is usually not necessary to pass anything to this argument, as the model’s template will be used by default.

  • tools (List[Dict], optional) – A list of tools (callable functions) that will be accessible to the model. If the template does not support function calling, this argument will have no effect. Each tool should be passed as a JSON Schema, giving the name, description and argument types for the tool. See our [chat templating guide](https://huggingface.co/docs/transformers/main/en/chat_templating#automated-function-conversion-for-tool-use) for more information.

Returns:

The chat template string.

Return type:

str

get_special_tokens_mask(token_ids_0: List[int], token_ids_1: List[int] | None = None, already_has_special_tokens: bool = False) List[int]

Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model or encode_plus methods.

Parameters:
  • token_ids_0 (List[int]) – List of ids of the first sequence.

  • token_ids_1 (List[int], optional) – List of ids of the second sequence.

  • already_has_special_tokens (bool, optional, defaults to False) – Whether or not the token list is already formatted with special tokens for the model.

Returns:

1 for a special token, 0 for a sequence token.

Return type:

A list of integers in the range [0, 1]
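
For example (a sketch assuming the BiEncoderTokenizer instance from the usage sketch near the top of this page and list output, i.e. no return_tensors):

```python
# Mark which positions of an already-encoded query are special tokens
# (marker, CLS, SEP, mask expansion, padding).
query_encoding = tokenizer.tokenize_query(["what is information retrieval?"])
special_mask = tokenizer.get_special_tokens_mask(
    query_encoding["input_ids"][0], already_has_special_tokens=True
)
```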

get_vocab() Dict[str, int]

Returns the vocabulary as a dictionary of token to index.

tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

Returns:

The vocabulary.

Return type:

Dict[str, int]
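
For example (assuming add_marker_tokens=True so that the marker token is part of the vocabulary):

```python
# The vocabulary maps tokens to ids, consistent with convert_tokens_to_ids.
vocab = tokenizer.get_vocab()
assert vocab[tokenizer.QUERY_TOKEN] == tokenizer.convert_tokens_to_ids(tokenizer.QUERY_TOKEN)
```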

property max_len_sentences_pair: int

The maximum combined length of a pair of sentences that can be fed to the model.

Type:

int

property max_len_single_sentence: int

The maximum length of a sentence that can be fed to the model.

Type:

int

pad(encoded_inputs: BatchEncoding | List[BatchEncoding] | Dict[str, List[int]] | Dict[str, List[List[int]]] | List[Dict[str, List[int]]], padding: bool | str | PaddingStrategy = True, max_length: int | None = None, pad_to_multiple_of: int | None = None, padding_side: bool | None = None, return_attention_mask: bool | None = None, return_tensors: str | TensorType | None = None, verbose: bool = True) BatchEncoding

Pad a single encoded input or a batch of encoded inputs up to predefined length or to the max sequence length in the batch.

Padding side (left/right) and padding token ids are defined at the tokenizer level (with self.padding_side, self.pad_token_id and self.pad_token_type_id).

Please note that with a fast tokenizer, using the __call__ method is faster than using a method to encode the text followed by a call to the pad method to get a padded encoding.

Note: If the encoded_inputs passed are a dictionary of numpy arrays, PyTorch tensors or TensorFlow tensors, the result will use the same type unless you provide a different tensor type with return_tensors. In the case of PyTorch tensors, you will however lose the specific device of your tensors.

Parameters:
  • encoded_inputs ([BatchEncoding], list of [BatchEncoding], Dict[str, List[int]], Dict[str, List[List[int]]] or List[Dict[str, List[int]]]) –

    Tokenized inputs. Can represent one input ([BatchEncoding] or Dict[str, List[int]]) or a batch of tokenized inputs (list of [BatchEncoding], Dict[str, List[List[int]]] or List[Dict[str, List[int]]]) so you can use this method during preprocessing as well as in a PyTorch Dataloader collate function.

    Instead of List[int] you can have tensors (numpy arrays, PyTorch tensors or TensorFlow tensors), see the note above for the return type.

  • padding (bool, str or [~utils.PaddingStrategy], optional, defaults to True) –

    Select a strategy to pad the returned sequences (according to the model’s padding side and padding index) among:

    • True or ‘longest’ (default): Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).

    • ’max_length’: Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.

    • False or ‘do_not_pad’: No padding (i.e., can output a batch with sequences of different lengths).

  • max_length (int, optional) – Maximum length of the returned list and optionally padding length (see above).

  • pad_to_multiple_of (int, optional) –

    If set will pad the sequence to a multiple of the provided value.

    This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).

  • padding_side (str, optional) – The side on which the model should have padding applied. Should be selected between [‘right’, ‘left’]. Default value is picked from the class attribute of the same name.

  • return_attention_mask (bool, optional) –

    Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.

    [What are attention masks?](../glossary#attention-mask)

  • return_tensors (str or [~utils.TensorType], optional) –

    If set, will return tensors instead of list of python integers. Acceptable values are:

    • ’tf’: Return TensorFlow tf.constant objects.

    • ’pt’: Return PyTorch torch.Tensor objects.

    • ’np’: Return Numpy np.ndarray objects.

  • verbose (bool, optional, defaults to True) – Whether or not to print more information and warnings.
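
For example (a sketch; the input ids below are illustrative only), pad can be used as a collate function for pre-encoded features:

```python
# Pad a batch of variable-length encodings to the longest sequence and
# return PyTorch tensors, e.g. inside a DataLoader collate_fn.
features = [
    {"input_ids": [101, 2054, 102]},
    {"input_ids": [101, 2054, 2003, 1037, 102]},
]
batch = tokenizer.pad(features, padding=True, return_tensors="pt")
```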

property pad_token_type_id: int

Id of the padding token type in the vocabulary.

Type:

int

prepare_for_model(ids: List[int], pair_ids: List[int] | None = None, add_special_tokens: bool = True, padding: bool | str | PaddingStrategy = False, truncation: bool | str | TruncationStrategy = None, max_length: int | None = None, stride: int = 0, pad_to_multiple_of: int | None = None, padding_side: bool | None = None, return_tensors: str | TensorType | None = None, return_token_type_ids: bool | None = None, return_attention_mask: bool | None = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, prepend_batch_axis: bool = False, **kwargs) BatchEncoding

Prepares a sequence of input ids, or a pair of sequences of input ids, so that it can be used by the model. It adds special tokens, truncates sequences if overflowing while taking into account the special tokens, and manages a moving window (with user-defined stride) for overflowing tokens. Please note: for pair_ids different from None and truncation_strategy = longest_first or True, it is not possible to return overflowing tokens. Such a combination of arguments will raise an error.

Parameters:
  • ids (List[int]) – Tokenized input ids of the first sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.

  • pair_ids (List[int], optional) – Tokenized input ids of the second sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.

  • add_special_tokens (bool, optional, defaults to True) – Whether or not to add special tokens when encoding the sequences. This will use the underlying PretrainedTokenizerBase.build_inputs_with_special_tokens function, which defines which tokens are automatically added to the input ids. This is useful if you want to add bos or eos tokens automatically.

  • padding (bool, str or [~utils.PaddingStrategy], optional, defaults to False) –

    Activates and controls padding. Accepts the following values:

    • True or ‘longest’: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).

    • ’max_length’: Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.

    • False or ‘do_not_pad’ (default): No padding (i.e., can output a batch with sequences of different lengths).

  • truncation (bool, str or [~tokenization_utils_base.TruncationStrategy], optional, defaults to False) –

    Activates and controls truncation. Accepts the following values:

    • True or ‘longest_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.

    • ’only_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • ’only_second’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • False or ‘do_not_truncate’ (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).

  • max_length (int, optional) –

    Controls the maximum length to use by one of the truncation/padding parameters.

    If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.

  • stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.

  • is_split_into_words (bool, optional, defaults to False) – Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.

  • pad_to_multiple_of (int, optional) – If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).

  • padding_side (str, optional) – The side on which the model should have padding applied. Should be selected between [‘right’, ‘left’]. Default value is picked from the class attribute of the same name.

  • return_tensors (str or [~utils.TensorType], optional) –

    If set, will return tensors instead of list of python integers. Acceptable values are:

    • ’tf’: Return TensorFlow tf.constant objects.

    • ’pt’: Return PyTorch torch.Tensor objects.

    • ’np’: Return Numpy np.ndarray objects.

  • return_token_type_ids (bool, optional) –

    Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.

    [What are token type IDs?](../glossary#token-type-ids)

  • return_attention_mask (bool, optional) –

    Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.

    [What are attention masks?](../glossary#attention-mask)

  • return_overflowing_tokens (bool, optional, defaults to False) – Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead of returning overflowing tokens.

  • return_special_tokens_mask (bool, optional, defaults to False) – Whether or not to return special tokens mask information.

  • return_offsets_mapping (bool, optional, defaults to False) –

    Whether or not to return (char_start, char_end) for each token.

    This is only available on fast tokenizers inheriting from [PreTrainedTokenizerFast], if using Python’s tokenizer, this method will raise NotImplementedError.

  • return_length (bool, optional, defaults to False) – Whether or not to return the lengths of the encoded inputs.

  • verbose (bool, optional, defaults to True) – Whether or not to print more information and warnings.

  • **kwargs – passed to the self.tokenize() method

Returns:

A [BatchEncoding] with the following fields:

  • input_ids – List of token ids to be fed to a model.

    [What are input IDs?](../glossary#input-ids)

  • token_type_ids – List of token type ids to be fed to a model (when return_token_type_ids=True or if “token_type_ids” is in self.model_input_names).

    [What are token type IDs?](../glossary#token-type-ids)

  • attention_mask – List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names).

    [What are attention masks?](../glossary#attention-mask)

  • overflowing_tokens – List of overflowing tokens sequences (when a max_length is specified and return_overflowing_tokens=True).

  • num_truncated_tokens – Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).

  • special_tokens_mask – List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).

  • length – The length of the inputs (when return_length=True)

Return type:

[BatchEncoding]
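
For illustration, a minimal sketch of how the truncation, padding, and tensor-conversion arguments above interact when encoding a single text; the checkpoint name bert-base-uncased is only a placeholder and any supported backbone can be substituted.

```python
from lightning_ir.bi_encoder.tokenizer import BiEncoderTokenizer

# Placeholder checkpoint; substitute the backbone you actually use.
tokenizer = BiEncoderTokenizer.from_pretrained("bert-base-uncased")

# Truncate to max_length, pad up to max_length, and return PyTorch tensors
# instead of plain Python lists.
encoding = tokenizer.encode_plus(
    "what is information retrieval?",
    truncation=True,
    max_length=16,
    padding="max_length",
    return_tensors="pt",
)
print(encoding["input_ids"].shape)       # (1, 16) after padding to max_length
print(encoding["attention_mask"].shape)  # same shape as input_ids
```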

prepare_seq2seq_batch(src_texts: List[str], tgt_texts: List[str] | None = None, max_length: int | None = None, max_target_length: int | None = None, padding: str = 'longest', return_tensors: str = None, truncation: bool = True, **kwargs) BatchEncoding

Prepare model inputs for translation. For best performance, translate one sentence at a time.

Parameters:
  • src_texts (List[str]) – List of documents to summarize or source language texts.

  • tgt_texts (list, optional) – List of summaries or target language texts.

  • max_length (int, optional) – Controls the maximum length for encoder inputs (documents to summarize or source language texts). If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet), truncation/padding to a maximum length will be deactivated.

  • max_target_length (int, optional) – Controls the maximum length of decoder inputs (target language texts or summaries). If left unset or set to None, this will use the max_length value.

  • padding (bool, str or [~utils.PaddingStrategy], optional, defaults to ‘longest’) –

    Activates and controls padding. Accepts the following values:

    • True or ‘longest’: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).

    • ’max_length’: Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.

    • False or ‘do_not_pad’: No padding (i.e., can output a batch with sequences of different lengths).

  • return_tensors (str or [~utils.TensorType], optional) –

    If set, will return tensors instead of list of python integers. Acceptable values are:

    • ’tf’: Return TensorFlow tf.constant objects.

    • ’pt’: Return PyTorch torch.Tensor objects.

    • ’np’: Return Numpy np.ndarray objects.

  • truncation (bool, str or [~tokenization_utils_base.TruncationStrategy], optional, defaults to True) –

    Activates and controls truncation. Accepts the following values:

    • True or ‘longest_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.

    • ’only_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • ’only_second’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • False or ‘do_not_truncate’: No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).

  • **kwargs – Additional keyword arguments passed along to self.__call__.

Returns:

A [BatchEncoding] with the following fields:

  • input_ids – List of token ids to be fed to the encoder.

  • attention_mask – List of indices specifying which tokens should be attended to by the model.

  • labels – List of token ids for tgt_texts.

The full set of keys [input_ids, attention_mask, labels] will only be returned if tgt_texts is passed. Otherwise, input_ids and attention_mask will be the only keys.

Return type:

[BatchEncoding]

push_to_hub(repo_id: str, use_temp_dir: bool | None = None, commit_message: str | None = None, private: bool | None = None, token: bool | str | None = None, max_shard_size: int | str | None = '5GB', create_pr: bool = False, safe_serialization: bool = True, revision: str = None, commit_description: str = None, tags: List[str] | None = None, **deprecated_kwargs) str

Upload the tokenizer files to the 🤗 Model Hub.

Parameters:
  • repo_id (str) – The name of the repository you want to push your tokenizer to. It should contain your organization name when pushing to a given organization.

  • use_temp_dir (bool, optional) – Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to True if there is no directory named like repo_id, False otherwise.

  • commit_message (str, optional) – Message to commit while pushing. Will default to “Upload tokenizer”.

  • private (bool, optional) – Whether to make the repo private. If None (default), the repo will be public unless the organization’s default is private. This value is ignored if the repo already exists.

  • token (bool or str, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified.

  • max_shard_size (int or str, optional, defaults to “5GB”) – Only applicable for models. The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be smaller than this size. If expressed as a string, needs to be digits followed by a unit (like “5MB”). We default it to “5GB” so that users can easily load models on free-tier Google Colab instances without any CPU OOM issues.

  • create_pr (bool, optional, defaults to False) – Whether or not to create a PR with the uploaded files or directly commit.

  • safe_serialization (bool, optional, defaults to True) – Whether or not to convert the model weights in safetensors format for safer serialization.

  • revision (str, optional) – Branch to push the uploaded files to.

  • commit_description (str, optional) – The description of the commit that will be created.

  • tags (List[str], optional) – List of tags to push on the Hub.

Examples:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")

# Push the tokenizer to your namespace with the name "my-finetuned-bert".
tokenizer.push_to_hub("my-finetuned-bert")

# Push the tokenizer to an organization with the name "my-finetuned-bert".
tokenizer.push_to_hub("huggingface/my-finetuned-bert")
```

property query_token_id: int | None

The token id of the query token if marker tokens are added.

Returns:

Token id of the query token

Return type:

int | None
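
A small sketch of looking up the marker id, assuming a placeholder BERT backbone; with add_marker_tokens=False the property is expected to return None.

```python
from lightning_ir.bi_encoder.tokenizer import BiEncoderTokenizer

# Placeholder checkpoint; marker tokens are added because add_marker_tokens defaults to True.
tokenizer = BiEncoderTokenizer.from_pretrained("bert-base-uncased", add_marker_tokens=True)

print(tokenizer.query_token_id)  # vocabulary id of the query marker token, or None without markers
```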

classmethod register_for_auto_class(auto_class='AutoTokenizer')

Register this class with a given auto class. This should only be used for custom tokenizers as the ones in the library are already mapped with AutoTokenizer.

Warning: This API is experimental and may have some slight breaking changes in the next releases.

Parameters:

auto_class (str or type, optional, defaults to “AutoTokenizer”) – The auto class to register this new tokenizer with.
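
A sketch of registering a hypothetical custom subclass so that AutoTokenizer can resolve it; the subclass name is illustrative only.

```python
from lightning_ir.bi_encoder.tokenizer import BiEncoderTokenizer

# Hypothetical custom tokenizer; registration is only needed for classes
# that are not already mapped to AutoTokenizer by the library.
class MyCustomBiEncoderTokenizer(BiEncoderTokenizer):
    pass

MyCustomBiEncoderTokenizer.register_for_auto_class("AutoTokenizer")
```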

sanitize_special_tokens() int

The sanitize_special_tokens method is now deprecated, kept only for backward compatibility, and will be removed in transformers v5.

save_pretrained(save_directory: str | PathLike, legacy_format: bool | None = None, filename_prefix: str | None = None, push_to_hub: bool = False, **kwargs) Tuple[str]

Save the full tokenizer state.

This method makes sure the full tokenizer can then be re-loaded using the [~tokenization_utils_base.PreTrainedTokenizer.from_pretrained] class method.

Warning: This won’t save modifications you may have applied to the tokenizer after instantiation (for instance, modifying tokenizer.do_lower_case after creation).

Parameters:
  • save_directory (str or os.PathLike) – The path to a directory where the tokenizer will be saved.

  • legacy_format (bool, optional) –

    Only applicable for a fast tokenizer. If unset (default), will save the tokenizer in the unified JSON format as well as in legacy format if it exists, i.e. with tokenizer-specific vocabulary and a separate added_tokens file.

    If False, will only save the tokenizer in the unified JSON format. This format is incompatible with “slow” tokenizers (not powered by the tokenizers library), so the tokenizer will not be able to be loaded in the corresponding “slow” tokenizer.

    If True, will save the tokenizer in legacy format. If the “slow” tokenizer doesn’t exist, a ValueError is raised.

  • filename_prefix (str, optional) – A prefix to add to the names of the files saved by the tokenizer.

  • push_to_hub (bool, optional, defaults to False) – Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).

  • kwargs (Dict[str, Any], optional) – Additional key word arguments passed along to the [~utils.PushToHubMixin.push_to_hub] method.

Returns:

The files saved.

Return type:

A tuple of str
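
A minimal save-and-reload sketch; the checkpoint name and output directory are placeholders.

```python
from lightning_ir.bi_encoder.tokenizer import BiEncoderTokenizer

tokenizer = BiEncoderTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint

# Write the full tokenizer state (vocabulary, special tokens, configuration) to disk.
saved_files = tokenizer.save_pretrained("./my-bi-encoder-tokenizer")
print(saved_files)  # tuple of paths to the files that were written

# The saved state can later be restored from the same directory.
reloaded = BiEncoderTokenizer.from_pretrained("./my-bi-encoder-tokenizer")
```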

save_vocabulary(save_directory: str, filename_prefix: str | None = None) Tuple[str]

Save only the vocabulary of the tokenizer (vocabulary + added tokens).

This method won’t save the configuration and special token mappings of the tokenizer. Use [~PreTrainedTokenizerFast._save_pretrained] to save the whole state of the tokenizer.

Parameters:
  • save_directory (str) – The directory in which to save the vocabulary.

  • filename_prefix (str, optional) – An optional prefix to add to the names of the saved files.

Returns:

Paths to the files saved.

Return type:

Tuple[str]

property special_tokens_map: Dict[str, str | List[str]]

A dictionary mapping special token class attributes (cls_token, unk_token, etc.) to their values (‘<unk>’, ‘<cls>’, etc.).

Convert potential tokens of tokenizers.AddedToken type to string.

Type:

Dict[str, Union[str, List[str]]]

property special_tokens_map_extended: Dict[str, str | AddedToken | List[str | AddedToken]]

A dictionary mapping special token class attributes (cls_token, unk_token, etc.) to their values (‘<unk>’, ‘<cls>’, etc.).

Don’t convert tokens of tokenizers.AddedToken type to string so they can be used to control more finely how special tokens are tokenized.

Type:

Dict[str, Union[str, tokenizers.AddedToken, List[Union[str, tokenizers.AddedToken]]]]
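
Both properties can be inspected directly; a short sketch with a placeholder checkpoint.

```python
from lightning_ir.bi_encoder.tokenizer import BiEncoderTokenizer

tokenizer = BiEncoderTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint

# Plain-string view, e.g. {"cls_token": "[CLS]", "mask_token": "[MASK]", ...}
print(tokenizer.special_tokens_map)

# Extended view keeps tokenizers.AddedToken objects so their normalization
# and stripping behaviour can be inspected.
print(tokenizer.special_tokens_map_extended)
```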

tokenize(queries: str | Sequence[str] | None = None, docs: str | Sequence[str] | None = None, **kwargs) Dict[str, BatchEncoding][source]

Tokenizes queries and documents.

Parameters:
  • queries (str | Sequence[str] | None, optional) – Queries to tokenize, defaults to None

  • docs (str | Sequence[str] | None, optional) – Documents to tokenize, defaults to None

Returns:

Dictionary of tokenized queries and documents

Return type:

Dict[str, BatchEncoding]
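
A minimal sketch of encoding queries and documents in one call; the checkpoint is a placeholder, and since the exact dictionary keys are version-dependent the example simply iterates over whatever is returned.

```python
from lightning_ir.bi_encoder.tokenizer import BiEncoderTokenizer

tokenizer = BiEncoderTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint

encodings = tokenizer.tokenize(
    queries=["what is information retrieval?"],
    docs=["Information retrieval is the task of finding relevant documents for a query."],
    return_tensors="pt",
)

# Queries and documents are encoded separately, each as its own BatchEncoding.
for name, encoding in encodings.items():
    print(name, encoding.input_ids.shape)
```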

tokenize_doc(docs: Sequence[str] | str, *args, **kwargs) BatchEncoding[source]

Tokenizes input documents.

Parameters:

docs (Sequence[str] | str) – Document or documents to tokenize

Returns:

Tokenized documents

Return type:

BatchEncoding

tokenize_query(queries: Sequence[str] | str, *args, **kwargs) BatchEncoding[source]

Tokenizes input queries.

Parameters:

queries (Sequence[str] | str) – Query or queries to tokenize

Returns:

Tokenized queries

Return type:

BatchEncoding
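
Queries and documents can also be encoded separately via tokenize_query and tokenize_doc above; a sketch with a placeholder checkpoint, where the length limits come from query_length and doc_length.

```python
from lightning_ir.bi_encoder.tokenizer import BiEncoderTokenizer

tokenizer = BiEncoderTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint

query_encoding = tokenizer.tokenize_query("what is information retrieval?", return_tensors="pt")
doc_encoding = tokenizer.tokenize_doc(
    "Information retrieval is the task of finding relevant documents for a query.",
    return_tensors="pt",
)

print(query_encoding.input_ids.shape)  # bounded by query_length (32 by default)
print(doc_encoding.input_ids.shape)    # bounded by doc_length (512 by default)
```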

truncate_sequences(ids: List[int], pair_ids: List[int] | None = None, num_tokens_to_remove: int = 0, truncation_strategy: str | TruncationStrategy = 'longest_first', stride: int = 0) Tuple[List[int], List[int], List[int]]

Truncates a sequence pair in-place following the strategy.

Parameters:
  • ids (List[int]) – Tokenized input ids of the first sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.

  • pair_ids (List[int], optional) – Tokenized input ids of the second sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.

  • num_tokens_to_remove (int, optional, defaults to 0) – Number of tokens to remove using the truncation strategy.

  • truncation_strategy (str or [~tokenization_utils_base.TruncationStrategy], optional, defaults to ‘longest_first’) –

    The strategy to follow for truncation. Can be:

    • ’longest_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.

    • ’only_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • ’only_second’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • ’do_not_truncate’: No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).

  • stride (int, optional, defaults to 0) – If set to a positive number, the overflowing tokens returned will contain some tokens from the main sequence returned. The value of this argument defines the number of additional tokens.

Returns:

The truncated ids, the truncated pair_ids and the list of overflowing tokens. Note: the longest_first strategy returns an empty list of overflowing tokens if a pair of sequences (or a batch of pairs) is provided.

Return type:

Tuple[List[int], List[int], List[int]]
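
A small sketch of the truncation helper; the token id lists are toy values, since the method only operates on plain integer lists (the checkpoint is again a placeholder).

```python
from lightning_ir.bi_encoder.tokenizer import BiEncoderTokenizer

tokenizer = BiEncoderTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint

# Toy id lists standing in for already-tokenized sequences.
ids = list(range(100, 110))       # 10 tokens in the first sequence
pair_ids = list(range(200, 206))  # 6 tokens in the second sequence

# 'longest_first' removes one token at a time from whichever sequence is currently longer.
truncated_ids, truncated_pair_ids, overflowing = tokenizer.truncate_sequences(
    ids,
    pair_ids=pair_ids,
    num_tokens_to_remove=4,
    truncation_strategy="longest_first",
)
print(len(truncated_ids), len(truncated_pair_ids))  # 6 6
print(overflowing)  # empty for pairs with 'longest_first', as noted above
```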