Fix typo: indinces -> indices #8159

Merged: 5 commits, Oct 29, 2020
26 changes: 12 additions & 14 deletions src/transformers/pipelines.py
@@ -122,7 +122,7 @@ def get_default_model(targeted_task: Dict, framework: Optional[str], task_option

Args:
targeted_task (:obj:`Dict` ):
- Dictionnary representing the given task, that should contain default models
+ Dictionary representing the given task, that should contain default models

framework (:obj:`str`, None)
"pt", "tf" or None, representing a specific framework if it was specified, or None if we don't know yet.
@@ -150,9 +150,7 @@ def get_default_model(targeted_task: Dict, framework: Optional[str], task_option
else:
# XXX This error message needs to be updated to be more generic if more tasks are going to become
# parametrized
- raise ValueError(
- 'The task defaults can\'t be correctly selectionned. You probably meant "translation_XX_to_YY"'
- )
+ raise ValueError('The task defaults can\'t be correctly selected. You probably meant "translation_XX_to_YY"')

if framework is None:
framework = "pt"
@@ -695,7 +693,7 @@ def _forward(self, inputs, return_tensors=False):
Internal framework specific forward dispatching

Args:
- inputs: dict holding all the keyworded arguments for required by the model forward method.
+ inputs: dict holding all the keyword arguments for required by the model forward method.
return_tensors: Whether to return native framework (pt/tf) tensors rather than numpy array

Returns:
@@ -874,7 +872,7 @@ def __call__(
args (:obj:`str` or :obj:`List[str]`):
One or several prompts (or one list of prompts) to complete.
return_tensors (:obj:`bool`, `optional`, defaults to :obj:`False`):
- Whether or not to include the tensors of predictions (as token indinces) in the outputs.
+ Whether or not to include the tensors of predictions (as token indices) in the outputs.
return_text (:obj:`bool`, `optional`, defaults to :obj:`True`):
Whether or not to include the decoded texts in the outputs.
clean_up_tokenization_spaces (:obj:`bool`, `optional`, defaults to :obj:`False`):
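Note: the `return_text` / `return_tensors` flags documented in this hunk belong to the text generation pipeline's `__call__`. A hedged sketch, with the model pinned to `gpt2` for reproducibility (the checkpoint choice and output key names are assumptions about this generation of the library, not part of the diff):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# return_text=True keeps the decoded completion; return_tensors=True additionally
# returns the generated token indices ("indices", in the post-fix spelling).
outputs = generator(
    "Token indices map tokens to",
    max_length=20,
    return_text=True,
    return_tensors=True,
)
print(outputs)  # each record holds the decoded text and, with return_tensors=True, the token ids
```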
@@ -1710,7 +1708,7 @@ def __call__(self, *args, **kwargs):
question (:obj:`str` or :obj:`List[str]`):
One or several question(s) (must be used in conjunction with the :obj:`context` argument).
context (:obj:`str` or :obj:`List[str]`):
- One or several context(s) associated with the qustion(s) (must be used in conjunction with the
+ One or several context(s) associated with the question(s) (must be used in conjunction with the
:obj:`question` argument).
topk (:obj:`int`, `optional`, defaults to 1):
The number of answers to return (will be chosen by order of likelihood).
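Note: the `question` / `context` / `topk` arguments above are the question-answering pipeline interface. A small example, assuming the library's default SQuAD-finetuned checkpoint:

```python
from transformers import pipeline

qa = pipeline("question-answering")

# question and context must be passed together, as the docstring says;
# topk controls how many answers come back, ordered by likelihood.
result = qa(
    question="What was misspelled?",
    context="The word 'indices' was misspelled as 'indinces' in several docstrings.",
    topk=1,
)
print(result["answer"], result["score"])
```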
@@ -1959,7 +1957,7 @@ def __call__(
return_text (:obj:`bool`, `optional`, defaults to :obj:`True`):
Whether or not to include the decoded texts in the outputs
return_tensors (:obj:`bool`, `optional`, defaults to :obj:`False`):
- Whether or not to include the tensors of predictions (as token indinces) in the outputs.
+ Whether or not to include the tensors of predictions (as token indices) in the outputs.
clean_up_tokenization_spaces (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether or not to clean up the potential extra spaces in the text output.
generate_kwargs:
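Note: this hunk edits one of the generation-style pipelines that share the `return_text` / `return_tensors` / `clean_up_tokenization_spaces` / `generate_kwargs` interface; summarization is used below purely as a representative example, since the excerpt does not show which pipeline the hunk belongs to:

```python
from transformers import pipeline

summarizer = pipeline("summarization")

article = (
    "Token indices are the integer ids a tokenizer assigns to tokens. "
    "Generation pipelines can return decoded text, the raw indices, or both."
)
# max_length / min_length are forwarded to the underlying generate() call
# through generate_kwargs, as described in the hunk above.
summary = summarizer(article, max_length=30, min_length=5, clean_up_tokenization_spaces=True)
print(summary[0]["summary_text"])
```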
@@ -2077,7 +2075,7 @@ def __call__(
args (:obj:`str` or :obj:`List[str]`):
Texts to be translated.
return_tensors (:obj:`bool`, `optional`, defaults to :obj:`False`):
- Whether or not to include the tensors of predictions (as token indinces) in the outputs.
+ Whether or not to include the tensors of predictions (as token indices) in the outputs.
return_text (:obj:`bool`, `optional`, defaults to :obj:`True`):
Whether or not to include the decoded texts in the outputs.
clean_up_tokenization_spaces (:obj:`bool`, `optional`, defaults to :obj:`False`):
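Note: the translation pipeline documented here is the parametrized task the earlier error message alludes to, with the language pair encoded in the task name. A short sketch, assuming the default translation checkpoints of the time:

```python
from transformers import pipeline

# "translation_en_to_fr" is one concrete instance of "translation_XX_to_YY".
translator = pipeline("translation_en_to_fr")

print(translator("The indices are stored in a tensor.", return_text=True))
# Passing return_tensors=True would also include the generated token indices.
```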
@@ -2188,7 +2186,7 @@ def __call__(
args (:obj:`str` or :obj:`List[str]`):
Input text for the encoder.
return_tensors (:obj:`bool`, `optional`, defaults to :obj:`False`):
- Whether or not to include the tensors of predictions (as token indinces) in the outputs.
+ Whether or not to include the tensors of predictions (as token indices) in the outputs.
return_text (:obj:`bool`, `optional`, defaults to :obj:`True`):
Whether or not to include the decoded texts in the outputs.
clean_up_tokenization_spaces (:obj:`bool`, `optional`, defaults to :obj:`False`):
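Note: this hunk documents the generic encoder-decoder ("text-to-text") pipeline interface, where the positional arguments are the encoder's input text. A hedged sketch; the `text2text-generation` task string and the `t5-small` checkpoint are assumptions about the library of this period rather than facts stated in the diff:

```python
from transformers import pipeline

text2text = pipeline("text2text-generation", model="t5-small")

# T5-style prefixes turn the generic encoder-decoder pipeline into a concrete task.
print(text2text("summarize: Extra ids are indexed from the end of the vocabulary."))
```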
@@ -2253,8 +2251,8 @@ class Conversation:
:class:`~transformers.ConversationalPipeline`. The conversation contains a number of utility function to manage the
addition of new user input and generated model responses. A conversation needs to contain an unprocessed user input
before being passed to the :class:`~transformers.ConversationalPipeline`. This user input is either created when
- the class is instantiated, or by calling :obj:`conversional_pipeline.append_response("input")` after a conversation
- turn.
+ the class is instantiated, or by calling :obj:`conversational_pipeline.append_response("input")` after a
+ conversation turn.

Arguments:
text (:obj:`str`, `optional`):
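Note: the rewrapped sentence above describes how a `Conversation` accumulates user turns for the conversational pipeline. A hedged sketch of two turns, assuming the default conversational checkpoint of the time; user input for a later turn is added here with `add_user_input`:

```python
from transformers import Conversation, pipeline

chatbot = pipeline("conversational")

# The "unprocessed user input" the docstring mentions is supplied at construction...
conversation = Conversation("Can you recommend a book on typography?")
conversation = chatbot(conversation)
print(conversation.generated_responses[-1])

# ...and a later turn adds new user input before the pipeline is called again.
conversation.add_user_input("Is it suitable for beginners?")
conversation = chatbot(conversation)
print(conversation.generated_responses[-1])
```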
@@ -2671,8 +2669,8 @@ def check_task(task: str) -> Tuple[Dict, Any]:
- :obj:`"conversational"`

Returns:
- (task_defaults:obj:`dict`, task_options: (:obj:`tuple`, None)) The actual dictionnary required to initialize
- the pipeline and some extra task options for parametrized tasks like "translation_XX_to_YY"
+ (task_defaults:obj:`dict`, task_options: (:obj:`tuple`, None)) The actual dictionary required to initialize the
+ pipeline and some extra task options for parametrized tasks like "translation_XX_to_YY"


"""
2 changes: 1 addition & 1 deletion src/transformers/tokenization_t5.py
@@ -89,7 +89,7 @@ class T5Tokenizer(PreTrainedTokenizer):
extra_ids (:obj:`int`, `optional`, defaults to 100):
Add a number of extra ids added to the end of the vocabulary for use as sentinels. These tokens are
accessible as "<extra_id_{%d}>" where "{%d}" is a number between 0 and extra_ids-1. Extra tokens are
- indexed from the end of the vocabulary up to beginnning ("<extra_id_0>" is the last token in the vocabulary
+ indexed from the end of the vocabulary up to beginning ("<extra_id_0>" is the last token in the vocabulary
like in T5 preprocessing see `here
<https://github.com/google-research/text-to-text-transfer-transformer/blob/9fd7b14a769417be33bc6c850f9598764913c833/t5/data/preprocessors.py#L2117>`__).
additional_special_tokens (:obj:`List[str]`, `optional`):
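Note: the sentinel layout described above is easy to verify. A small check, assuming the public `t5-small` checkpoint and its usual vocabulary size (neither is part of this diff):

```python
from transformers import T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")

# With the default extra_ids=100 the sentinels occupy the end of the vocabulary,
# and "<extra_id_0>" is the very last token, as the docstring states.
print(len(tok))                                   # 32100 for t5-small (assumption)
print(tok.convert_tokens_to_ids("<extra_id_0>"))  # the highest id, here 32099
print(tok.convert_tokens_to_ids("<extra_id_1>"))  # one position earlier
```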
2 changes: 1 addition & 1 deletion src/transformers/tokenization_t5_fast.py
@@ -100,7 +100,7 @@ class T5TokenizerFast(PreTrainedTokenizerFast):
extra_ids (:obj:`int`, `optional`, defaults to 100):
Add a number of extra ids added to the end of the vocabulary for use as sentinels. These tokens are
accessible as "<extra_id_{%d}>" where "{%d}" is a number between 0 and extra_ids-1. Extra tokens are
- indexed from the end of the vocabulary up to beginnning ("<extra_id_0>" is the last token in the vocabulary
+ indexed from the end of the vocabulary up to beginning ("<extra_id_0>" is the last token in the vocabulary
like in T5 preprocessing see `here
<https://github.com/google-research/text-to-text-transfer-transformer/blob/9fd7b14a769417be33bc6c850f9598764913c833/t5/data/preprocessors.py#L2117>`__).
additional_special_tokens (:obj:`List[str]`, `optional`):
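Note: the fast tokenizer documents the same sentinel scheme; a quick hedged check that it agrees with the slow tokenizer, under the same `t5-small` assumption as above:

```python
from transformers import T5Tokenizer, T5TokenizerFast

slow = T5Tokenizer.from_pretrained("t5-small")
fast = T5TokenizerFast.from_pretrained("t5-small")

# Both tokenizers should place "<extra_id_0>" at the same, final position in the vocabulary.
assert slow.convert_tokens_to_ids("<extra_id_0>") == fast.convert_tokens_to_ids("<extra_id_0>")
```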