Commit 924c0de

sweep ready for tests
1 parent b05aa77 commit 924c0de

File tree: 7 files changed (+3, −78 lines)


docs/source/en/internal/generation_utils.md

Lines changed: 0 additions & 12 deletions

@@ -198,18 +198,6 @@ A [`StoppingCriteria`] can be used to change when to stop generation (other than
 [[autodoc]] EosTokenCriteria
     - __call__
 
-## Constraints
-
-A [`Constraint`] can be used to force the generation to include specific tokens or sequences in the output. Please note that this is exclusively available to our PyTorch implementations.
-
-[[autodoc]] Constraint
-
-[[autodoc]] PhrasalConstraint
-
-[[autodoc]] DisjunctiveConstraint
-
-[[autodoc]] ConstraintListState
-
 ## Streamers
 
 [[autodoc]] TextStreamer

docs/source/ja/internal/generation_utils.md

Lines changed: 0 additions & 12 deletions

@@ -300,18 +300,6 @@ generation_output[:2]
 [[autodoc]] MaxTimeCriteria
     - __call__
 
-## Constraints
-
-[`Constraint`] can be used to force the generated output to include specific tokens or sequences. Note that this is only available in the PyTorch implementations.
-
-[[autodoc]] Constraint
-
-[[autodoc]] PhrasalConstraint
-
-[[autodoc]] DisjunctiveConstraint
-
-[[autodoc]] ConstraintListState
-
 ## Streamers
 
 [[autodoc]] TextStreamer

docs/source/ko/internal/generation_utils.md

Lines changed: 0 additions & 12 deletions

@@ -305,18 +305,6 @@ generation_output[:2]
 [[autodoc]] EosTokenCriteria
     - __call__
 
-## Constraint [[transformers.Constraint]]
-
-[`Constraint`] is used to force the generated output to include specific tokens or sequences. This feature is only available for PyTorch implementations.
-
-[[autodoc]] Constraint
-
-[[autodoc]] PhrasalConstraint
-
-[[autodoc]] DisjunctiveConstraint
-
-[[autodoc]] ConstraintListState
-
 ## Streamers [[transformers.TextStreamer]]
 
 [[autodoc]] TextStreamer

docs/source/zh/internal/generation_utils.md

Lines changed: 0 additions & 12 deletions

@@ -295,18 +295,6 @@ generation_output[:2]
 [[autodoc]] MaxTimeCriteria
     - __call__
 
-## Constraints
-
-[`Constraint`] can be used to force the generated output to contain specific tokens or sequences. Note that this is only available for our PyTorch implementations.
-
-[[autodoc]] Constraint
-
-[[autodoc]] PhrasalConstraint
-
-[[autodoc]] DisjunctiveConstraint
-
-[[autodoc]] ConstraintListState
-
 ## Streamers
 
 [[autodoc]] TextStreamer

src/transformers/generation/beam_constraints.py

Lines changed: 1 addition & 0 deletions

@@ -7,6 +7,7 @@
 logger = logging.get_logger(__name__)
 
 
+# TODO joao, manuel: remove in v4.58.0
 class Constraint(ABC):
     r"""Abstract base class for all constraints that can be applied during generation.
     It must define how the constraint can be satisfied.
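The `Constraint` interface marked for removal above can be illustrated with a toy sketch. This is a hypothetical stand-in, not the actual transformers implementation: the real `PhrasalConstraint` exposes a richer API, but the core idea is a state machine that tracks progress through a token sequence the output must contain.

```python
# Toy sketch of the idea behind a phrasal constraint (hypothetical, not the
# real transformers class): track how far generation has advanced through a
# forced token sequence, and report which token would make progress next.
class ToyPhrasalConstraint:
    def __init__(self, token_ids):
        self.token_ids = token_ids
        self.fulfilled_len = 0  # number of forced tokens matched so far

    def advance(self):
        """Return the next token that would make progress, or None if done."""
        if self.completed():
            return None
        return self.token_ids[self.fulfilled_len]

    def update(self, token_id):
        """Consume a generated token, advancing or resetting the state."""
        if token_id == self.token_ids[self.fulfilled_len]:
            self.fulfilled_len += 1
        else:
            self.fulfilled_len = 0  # mismatch: the phrase must restart

    def completed(self):
        return self.fulfilled_len == len(self.token_ids)


constraint = ToyPhrasalConstraint([5, 9, 2])
for tok in [7, 5, 9, 2]:  # an unrelated token, then the full phrase in order
    constraint.update(tok)
print(constraint.completed())  # True: the phrase 5, 9, 2 appeared contiguously
```

During constrained beam search, the scorer would consult `advance()` to bias candidate tokens toward completing unfinished constraints; that machinery is what this commit begins to retire.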

src/transformers/generation/configuration_utils.py

Lines changed: 2 additions & 29 deletions

@@ -89,7 +89,6 @@ class GenerationConfig(PushToHubMixin):
         - *multinomial sampling* if `num_beams=1` and `do_sample=True`
         - *beam-search decoding* if `num_beams>1` and `do_sample=False`
         - *beam-search multinomial sampling* if `num_beams>1` and `do_sample=True`
-        - *constrained beam-search decoding* if `constraints!=None` or `force_words_ids!=None`
         - *assisted decoding* if `assistant_model` or `prompt_lookup_num_tokens` is passed to `.generate()`
 
     To learn more about decoding strategies refer to the [text generation strategies guide](../generation_strategies).
@@ -202,18 +201,10 @@ class GenerationConfig(PushToHubMixin):
         bad_words_ids (`list[list[int]]`, *optional*):
             List of list of token ids that are not allowed to be generated. Check
             [`~generation.NoBadWordsLogitsProcessor`] for further documentation and examples.
-        force_words_ids (`list[list[int]]` or `list[list[list[int]]]`, *optional*):
-            List of token ids that must be generated. If given a `list[list[int]]`, this is treated as a simple list of
-            words that must be included, the opposite to `bad_words_ids`. If given `list[list[list[int]]]`, this
-            triggers a [disjunctive constraint](https://github.com/huggingface/transformers/issues/14081), where one
-            can allow different forms of each word.
         renormalize_logits (`bool`, *optional*, defaults to `False`):
             Whether to renormalize the logits after applying all the logits processors (including the custom
             ones). It's highly recommended to set this flag to `True` as the search algorithms suppose the score logits
             are normalized but some logit processors break the normalization.
-        constraints (`list[Constraint]`, *optional*):
-            Custom constraints that can be added to the generation to ensure that the output will contain the use of
-            certain tokens as defined by `Constraint` objects, in the most sensible way possible.
         forced_bos_token_id (`int`, *optional*, defaults to `model.config.forced_bos_token_id`):
             The id of the token to force as the first generated token after the `decoder_start_token_id`. Useful for
             multilingual models like [mBART](../model_doc/mbart) where the first generated token needs to be the target
@@ -374,9 +365,7 @@ def __init__(self, **kwargs):
         self.length_penalty = kwargs.pop("length_penalty", 1.0)
         self.no_repeat_ngram_size = kwargs.pop("no_repeat_ngram_size", 0)
         self.bad_words_ids = kwargs.pop("bad_words_ids", None)
-        self.force_words_ids = kwargs.pop("force_words_ids", None)
         self.renormalize_logits = kwargs.pop("renormalize_logits", False)
-        self.constraints = kwargs.pop("constraints", None)
         self.forced_bos_token_id = kwargs.pop("forced_bos_token_id", None)
         self.forced_eos_token_id = kwargs.pop("forced_eos_token_id", None)
         self.remove_invalid_values = kwargs.pop("remove_invalid_values", False)
@@ -434,6 +423,8 @@ def __init__(self, **kwargs):
         self.dola_layers = kwargs.pop("dola_layers", None)
         self.diversity_penalty = kwargs.pop("diversity_penalty", 0.0)
         self.num_beam_groups = kwargs.pop("num_beam_groups", 1)
+        self.constraints = kwargs.pop("constraints", None)
+        self.force_words_ids = kwargs.pop("force_words_ids", None)
 
         # The remaining attributes do not parametrize `.generate()`, but are informative and/or used by the hub
         # interface.
@@ -625,24 +616,6 @@ def validate(self, strict=False):
             minor_issues["length_penalty"] = single_beam_wrong_parameter_msg.format(
                 flag_name="length_penalty", flag_value=self.length_penalty
             )
-            if self.constraints is not None:
-                minor_issues["constraints"] = single_beam_wrong_parameter_msg.format(
-                    flag_name="constraints", flag_value=self.constraints
-                )
-
-        # 2.3. detect incorrect parameterization specific to advanced beam modes
-        else:
-            # constrained beam search
-            if self.constraints is not None or self.force_words_ids is not None:
-                constrained_wrong_parameter_msg = (
-                    "one of `constraints`, `force_words_ids` is not `None`, triggering constrained beam search. "
-                    "However, `{flag_name}` is set to `{flag_value}`, which is incompatible with this generation "
-                    "mode. Set `constraints` and `force_words_ids` to `None` or unset `{flag_name}` to continue."
-                )
-                if self.do_sample is True:
-                    raise ValueError(
-                        constrained_wrong_parameter_msg.format(flag_name="do_sample", flag_value=self.do_sample)
-                    )
 
         # 2.4. check `num_return_sequences`
         if self.num_return_sequences != 1:
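The diff above moves `constraints` and `force_words_ids` from the documented parameter block to the tail of `__init__` and drops their validation: the attributes are still popped from `kwargs` and stored, so existing configs keep loading, but the options are no longer advertised or checked. A minimal sketch of this `kwargs.pop` config pattern, with illustrative names rather than the real class:

```python
# Minimal sketch of the kwargs.pop config pattern used by GenerationConfig
# (ToyGenerationConfig is hypothetical; only the pattern mirrors the diff).
class ToyGenerationConfig:
    def __init__(self, **kwargs):
        # Actively documented and validated parameters.
        self.num_beams = kwargs.pop("num_beams", 1)
        self.do_sample = kwargs.pop("do_sample", False)
        # Deprecated parameters: still accepted and stored so that old
        # serialized configs load cleanly, but no longer documented.
        self.constraints = kwargs.pop("constraints", None)
        self.force_words_ids = kwargs.pop("force_words_ids", None)
        # Whatever remains is preserved rather than rejected.
        self.extra = kwargs


cfg = ToyGenerationConfig(num_beams=4, force_words_ids=[[42]])
print(cfg.num_beams)        # 4
print(cfg.force_words_ids)  # [[42]]
```

Because each recognized key is popped, the leftover `kwargs` dict holds only unrecognized entries, which is how such configs stay forward-compatible with keys added in later versions.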

src/transformers/generation/utils.py

Lines changed: 0 additions & 1 deletion

@@ -367,7 +367,6 @@ class GenerationMixin(ContinuousMixin):
         - *multinomial sampling* if `num_beams=1` and `do_sample=True`
         - *beam-search decoding* if `num_beams>1` and `do_sample=False`
         - *beam-search multinomial sampling* if `num_beams>1` and `do_sample=True`
-        - *constrained beam-search decoding* if `constraints!=None` or `force_words_ids!=None`
         - *assisted decoding* if `assistant_model` or `prompt_lookup_num_tokens` is passed to `.generate()`
 
     To learn more about decoding strategies refer to the [text generation strategies guide](../generation_strategies).
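After this commit, the docstring's flag-to-mode table no longer lists constrained beam search. A hypothetical helper mirroring the remaining table (the function name and the greedy case, which sits just above the shown context in the full docstring, are assumptions, not part of this diff):

```python
# Hypothetical helper mapping generation flags to the decoding modes that
# remain in the docstring after this commit (constrained beam search removed).
def decoding_mode(num_beams, do_sample):
    if num_beams == 1:
        return "multinomial sampling" if do_sample else "greedy decoding"
    return "beam-search multinomial sampling" if do_sample else "beam-search decoding"


print(decoding_mode(1, False))  # greedy decoding
print(decoding_mode(4, True))   # beam-search multinomial sampling
```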
