
Enhancing SFT Training Efficiency Using Packing and FlashAttention2 with Position IDs #31629

Merged
merged 13 commits into huggingface:main from RhuiDih:dev/fa_packing_posid on Jul 23, 2024

Conversation


@RhuiDih RhuiDih commented Jun 26, 2024

What does this PR do?

Improve throughput, training time, and memory utilization for instruction tuning by enabling padding-free and attention-mask-free attention.

Specifically, this PR adds the capability to utilize position_ids in FlashAttention2's _flash_attention_forward() in the packing case (attention_mask=None), for models whose respective DecoderLayer implementations use position_ids.

This PR also adds a new off-the-shelf data collator, DataCollatorWithFlattening, which packs the examples in a mini batch into one long sequence, returns position_ids as well, and sets the first token of each example's labels to -100 so that the last token of the previous example is not trained to predict the first token of the next example.

This enables the following:

  1. Use of packing for instruction tuning without incorrect cross-example attention
  2. Significant increase in training throughput and reduction in memory utilization
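
A minimal usage sketch of the new collator (illustrative only; the model checkpoint, dataset variable, and Trainer wiring are assumptions, not part of this PR's diff):

# Illustrative only: intended usage of the new collator for packed, padding-free SFT.
# Assumes a tokenized dataset with "input_ids" (and optionally "labels") per example.
from transformers import (
    AutoModelForCausalLM,
    DataCollatorWithFlattening,
    Trainer,
    TrainingArguments,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",                # any FA2-capable causal LM (assumption)
    attn_implementation="flash_attention_2",   # required for the padding-free path
    torch_dtype="auto",
)

# The collator concatenates every example of a mini batch into one long row,
# emits position_ids that restart at 0 for each example, and sets the first
# label of each example to -100 so no token is trained to predict across
# example boundaries.
collator = DataCollatorWithFlattening()

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=4),
    train_dataset=tokenized_train_dataset,     # assumed to exist
    data_collator=collator,
)
trainer.train()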

Example Result 1
dataset: OrcaMath subset
setup: FSDP with 8 GPUs

| Model | Data Process | Time | Throughput (token/s) | Memory (MB) |
|---|---|---|---|---|
| Llama2-7B | Padding | 790 | 1269 | 22305 |
| Llama2-7B | This PR | 574 | 1746 | 20950 |
| Mistral-7B | Padding | 812 | 1216 | 23603 |
| Mistral-7B | This PR | 596 | 1658 | 22409 |

Example Result 2
dataset: FLAN subset
setup: FSDP with 8 GPUs

| Model | Data Process | Time | Throughput (token/s) | Memory (MB) |
|---|---|---|---|---|
| Llama2-7B | Padding | 1526 | 771 | 29234 |
| Llama2-7B | This PR | 809 | 1455 | 23854 |
| Mistral-7B | Padding | 742 | 742 | 30625 |
| Mistral-7B | This PR | 1408 | 1408 | 24549 |

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline, Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Models:

Edit: added more description on the data collator


@ArthurZucker ArthurZucker left a comment


Thanks! I think we can wait a tad bit, #31446 is almost ready for merge!

attn_output = flash_attn_func(
    query_states, key_states, value_states, dropout, softmax_scale=softmax_scale, causal=causal
)
if (position_ids[:, -1] == position_ids.size(1) - 1).all():
Collaborator


this is an input-dependent control flow which won't be supported by compile, but compile is already not supported.
This needs a comment: we are checking that the input is not padded, and that we are doing prefill, right?
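
For illustration, a tiny sketch (not from the diff) of why this condition separates the unpacked case from the packed one:

import torch

# Unpadded, unpacked batch: every row is a single sequence 0..seq_len-1, so the
# last position id of each row equals seq_len - 1 and flash_attn_func can be used.
plain = torch.arange(6).repeat(2, 1)                 # shape (2, 6)
print((plain[:, -1] == plain.size(1) - 1).all())     # tensor(True)

# Packed batch: position ids restart at 0 at each example boundary, so some row
# ends below seq_len - 1 and flash_attn_varlen_func is needed to prevent
# cross-example attention.
packed = torch.tensor([[0, 1, 2, 0, 1, 2]])          # two packed examples of length 3
print((packed[:, -1] == packed.size(1) - 1).all())   # tensor(False)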

@@ -49,6 +49,7 @@
)
from .configuration_gemma import GemmaConfig

from ..llama.modeling_llama import prepare_fa2_from_position_ids
Collaborator


we never import from another model's modeling code in transformers; we use "copied from" or we define the general function in flash_attention_utils, for example

@@ -59,6 +59,20 @@
_CONFIG_FOR_DOC = "LlamaConfig"


def prepare_fa2_from_position_ids(query, key, value, position_ids, query_length):
Collaborator


this needs to be documented to explain what is happening
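
As a rough sketch of what such a helper typically does, under the assumption that q/k/v arrive as (batch, seq_len, num_heads, head_dim): flatten the batch and derive cumulative sequence lengths from the points where position_ids restart at 0, which is what flash_attn_varlen_func consumes. This is an illustration of the technique, not the exact code added in the PR.

import torch

def prepare_fa2_from_position_ids_sketch(query, key, value, position_ids):
    # Flatten (batch, seq_len, num_heads, head_dim) -> (total_tokens, num_heads, head_dim),
    # since the varlen kernel works on unpadded, concatenated tokens.
    query = query.view(-1, query.size(-2), query.size(-1))
    key = key.view(-1, key.size(-2), key.size(-1))
    value = value.view(-1, value.size(-2), value.size(-1))

    # Every position_id == 0 marks the start of a packed example; the cumulative
    # sequence lengths are those start offsets plus the total token count.
    flat_pos = position_ids.flatten()
    starts = torch.nonzero(flat_pos == 0, as_tuple=False).flatten()
    cu_seqlens = torch.cat(
        [starts, torch.tensor([flat_pos.numel()], device=flat_pos.device)]
    ).to(torch.int32)
    max_seqlen = int((cu_seqlens[1:] - cu_seqlens[:-1]).max())
    return query, key, value, cu_seqlens, max_seqlen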


RhuiDih commented Jul 12, 2024

@ArthurZucker #31446 has just been merged, let me rebase and update this PR

@RhuiDih RhuiDih force-pushed the dev/fa_packing_posid branch from cf6271f to c3451db Compare July 12, 2024 07:00

RhuiDih commented Jul 12, 2024

@ArthurZucker Rebase is done.

I have moved the code to modeling_flash_attention_utils.py and added the required comments/descriptions.

Regarding the control flow: ideally, cu_seq_len should be prepared in the data collator and the model should accept it as an argument to trigger flash_attn_varlen_func without relying on position_ids, but this is the quickest way to do padding-free training.


@ArthurZucker ArthurZucker left a comment


This LGTM! The only thing missing is a test!
Testing this codepath in particular, + the data collator in the trainer tests maybe?

Comment on lines 1620 to 1624
warnings.warn(
    "Using `DataCollatorForBatchFlattening` will flatten the entire mini batch into single long sequence."
    "Make sure your attention computation is able to handles it!"
)
Collaborator


I think we keep track of a list of models that support FA2, if needed for guidance here!


# if position_ids is provided and not all examples (rows) contain only 1 sequence,
# then use `flash_attn_varlen_func` to prevent cross-example attention and also allow a padding-free approach
elif position_ids is not None and not (position_ids[:, -1] == position_ids.size(1) - 1).all():
Collaborator


this is an input-dependent control flow, but we already don't support compile with FA2 AFAIK.
Alright for now!

@mayank31398
Contributor


RhuiDih commented Jul 17, 2024

@mayank31398
it can be done within the model or during labels preparation in the data collator:

if is_labels_provided:
    ret["labels"] += [-100] + features[idx]["labels"][1:]
else:
    ret["labels"] += [-100] + features[idx]["input_ids"][1:]


wynterl commented Jul 17, 2024

Hello @mayank31398, in this PR we are providing the capability to define the labels as the user wishes in the data collator, via transformers/src/transformers/data/data_collator.py


RhuiDih commented Jul 17, 2024

@ArthurZucker
I have added tests for the new data collator and for the models which already have FA2 tests in place. For the trainer, I can't find an existing FA2 + Trainer testing combo, could you point me to it?


@ArthurZucker ArthurZucker left a comment


LGTM, I just want a second look from our @fxmarty.
This should improve performance for everyone, but let's make sure we don't break BC! WDYT @fxmarty?

@ArthurZucker
Collaborator

No worries, I think the tests you added are enough


@fxmarty fxmarty left a comment


Looks good!

I strongly think this should be documented somewhere, maybe https://huggingface.co/docs/transformers/main/en/perf_train_gpu_one#flash-attention-2.

Comment on lines 586 to 590
@require_flash_attn
@require_torch_gpu
@slow
def test_flash_attention_2_padding_matches_padding_free_with_position_ids(self):
super().test_flash_attention_2_padding_matches_padding_free_with_position_ids()
Contributor


As this is already in test_modeling_common.py, adding this here is not needed.

Comment on lines 529 to 534
@require_flash_attn
@require_torch_gpu
@slow
def test_flash_attention_2_padding_matches_padding_free_with_position_ids(self):
super().test_flash_attention_2_padding_matches_padding_free_with_position_ids()

Contributor


same for all

Contributor Author


I have removed them in the latest commit

Comment on lines 4365 to 4377
# flatten
padfree_inputs_dict = {
    k: v[dummy_attention_mask.bool()].unsqueeze(0) for k, v in inputs_dict.items() if k != "attention_mask"
}
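
For context, a small sketch (variable names are illustrative, not necessarily the test's) of how padding-free position_ids can be rebuilt from the original attention mask so the flattened batch still marks example boundaries:

import torch

# dummy_attention_mask: (batch, seq_len), 1 for real tokens and 0 for padding.
dummy_attention_mask = torch.tensor([[1, 1, 1, 0],
                                     [1, 1, 0, 0]])

# Each row contributes positions 0..n_real-1; concatenating the rows yields
# position ids that restart at 0 at every example boundary, which is what the
# padding-free FA2 path keys on.
padfree_position_ids = torch.cat(
    [torch.arange(int(n)) for n in dummy_attention_mask.sum(dim=-1)]
).unsqueeze(0)
print(padfree_position_ids)  # tensor([[0, 1, 2, 0, 1]])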
Contributor


I am not a big fan of having unpadded inputs as [1, total_seqlens]. To me, it would make more sense to support one-dimensional tensors as input_ids ([total_tokens]), which after embedding become inputs_embeds ([total_tokens, hidden_size]), since the flash attention varlen frontend expects 3D tensors, not 4D, anyway.

Thus

    query = query.view(-1, query.size(-2), query.size(-1))
    key = key.view(-1, key.size(-2), key.size(-1))
    value = value.view(-1, value.size(-2), value.size(-1))

would not be needed, and neither would calculating the cumulative seqlens at each layer. This can maybe be left for another PR.

Contributor Author


Very much agree it is less than ideal that cu_seq_len is calculated again in every layer. It can be done in the data_collator or in the first forward() and passed along to all DecoderLayers; I'm not sure how that links to switching from [bs, seq_len, ...] to just [seq_len, ...]. Removing the bs dimension requires substantial changes to the model implementation; should HF decide to do this switch, those 3 lines can simply be removed.

Comment on lines +1621 to +1624
warnings.warn(
    "Using `DataCollatorWithFlattening` will flatten the entire mini batch into single long sequence."
    "Make sure your attention computation is able to handle it!"
)
Contributor


How should a user make sure?

Contributor Author


There is no easy way; the user will have to make sure the model supports FA2 and has position_ids.
Until models accept cu_seq_len, which will allow every model to remove padding once and for all, I think this serves as a good warning.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@RhuiDih RhuiDih force-pushed the dev/fa_packing_posid branch from 7573f85 to 00e7abf Compare July 23, 2024 10:42

@ArthurZucker ArthurZucker left a comment


🚀 nice getting this done!

@ArthurZucker ArthurZucker merged commit 9cf4f2a into huggingface:main Jul 23, 2024
23 checks passed
MHRDYN7 pushed a commit to MHRDYN7/transformers that referenced this pull request Jul 23, 2024
…ith Position IDs (huggingface#31629)

* add DataCollatorBatchFlattening

* Update data_collator.py

* change name

* new FA2 flow if position_ids is provided

* add comments

* minor fix

* minor fix data collator

* add test cases for models

* add test case for data collator

* remove extra code

* formating for ruff check and check_repo.py

* ruff format

ruff format tests src utils

* custom_init_isort.py
zucchini-nlp pushed a commit to zucchini-nlp/transformers that referenced this pull request Jul 24, 2024
itazap pushed a commit that referenced this pull request Jul 25, 2024
qubvel added a commit to qubvel/transformers that referenced this pull request Aug 6, 2024
commit 37c5ca5eb9012a1009cf23b892828902f6a8799a
Author: Raushan Turganbay <raushan@huggingface.co>
Date:   Tue Aug 6 10:24:19 2024 +0500

    Cache: create docs (#32150)

    * draft

    * updates

    * works?

    * try adding python example in hidden section

    * another try

    * hwo do i render python

    * format as html code?

    * Update docs/source/en/kv_cache.md

    Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

    * Update docs/source/en/kv_cache.md

    Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

    * Update docs/source/en/kv_cache.md

    Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

    * Update docs/source/en/kv_cache.md

    Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

    * Update docs/source/en/kv_cache.md

    Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

    * one more small update

    * should render hidden secrtion now

    * add outputs

    * fix links

    * check links

    * update all links

    * update with offloaded cache

    * all cache is importable, so they appear in docs

    * fix copies

    * docstring...

    ---------

    Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

commit 13dc6b0853c3cb54e79b18105c0528bc9e84881c
Author: Francisco Kurucz <juanfkurucz@gmail.com>
Date:   Mon Aug 5 19:14:50 2024 -0300

    Fix documentation links and code reference to model llava-next (#32434)

commit 7e5d46ded433605a906fcab6be43ac85307cca9b
Author: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Date:   Mon Aug 5 16:33:19 2024 +0100

    Respect the config's attn_implementation if set (#32383)

    * Respect the config's attn if set

    * Update test - can override in from_config

    * Fix

commit 458b0cd2c544cdd6c700f9b0c21077c889bcee6c
Author: Sai-Suraj-27 <sai.suraj.27.729@gmail.com>
Date:   Mon Aug 5 19:49:42 2024 +0530

    fix: Updated `test_embeded_special_tokens` for luke and mluke models (#32413)

    Fixed tokenizertests for luke, mluke models.

commit baf7e5c927744122c89ab1270c6c312541c7eb41
Author: Abdi <48970896+AbdiHaryadi@users.noreply.github.com>
Date:   Mon Aug 5 21:15:36 2024 +0800

    Persist embedding type of BART and mBART models after resize (#32242)

    * fix: persist embedding type of MBartConditonalGeneration after resize

    * fix: persist embedding type of BartConditonalGeneration after resize

commit f5f1e52f6cf13cdf63ff25c311d33e2f2a842911
Author: Francisco Kurucz <juanfkurucz@gmail.com>
Date:   Mon Aug 5 05:18:28 2024 -0300

    Fix documentation references to google/bit-50 model (#32407)

commit ea5da52ebc062ff56f0e3aa05b0e3cc981731e14
Author: Nicholas Broad <nbroad94@gmail.com>
Date:   Mon Aug 5 00:51:58 2024 -0700

    add values for neftune (#32399)

    I always forget what typical values are, and I have to look at the paper everytime. This will be a helpful reminder.

commit 3d7c2f9dea45338b7ebcd459b452e2fad7abfa1f
Author: Ita Zaporozhets <31893021+itazap@users.noreply.github.com>
Date:   Mon Aug 5 09:22:48 2024 +0200

    * save total_vocab_size = vocab_size + user added tokens to speed up operation

    * updating length when added_tokens_decoder is set

    * add test len(tokenizer)

commit 3bb646a54f42030e9bafa47cd3f64367691a3bc5
Author: Raushan Turganbay <raushan@huggingface.co>
Date:   Mon Aug 5 11:58:42 2024 +0500

    Phi3 tests: fix typing for Python 3.8 (#32388)

    fix phi

commit 05ae3a300d6f3534eeb99a08828a5bae6dd973db
Author: TechInterMezzo <account+github@techintermezzo.de>
Date:   Mon Aug 5 08:40:58 2024 +0200

    fix: SeamlessM4TFeatureExtractor stride remainder (#32088)

    * fix: SeamlessM4TFeatureExtractor stride remainder

    * Added attention mask size test

    * Reran ruff for style correction

commit 847bb856d55e3664150e408448fa59d0705b4d60
Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Date:   Mon Aug 5 08:38:34 2024 +0200

    Bump keras from 2.8.0 to 2.13.1 in /examples/research_projects/decision_transformer (#32393)

    Bump keras in /examples/research_projects/decision_transformer

    Bumps [keras](https://github.com/keras-team/keras) from 2.8.0 to 2.13.1.
    - [Release notes](https://github.com/keras-team/keras/releases)
    - [Commits](https://github.com/keras-team/keras/compare/v2.8.0...v2.13.1)

    ---
    updated-dependencies:
    - dependency-name: keras
      dependency-type: direct:production
    ...

    Signed-off-by: dependabot[bot] <support@github.com>
    Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

commit 621fb3c0edddf98f3272f3b197e772af4fa30b6c
Author: Xueshen Liu <liuxs@umich.edu>
Date:   Sat Aug 3 14:07:55 2024 -0400

    MixtralFlashAttention2: put "plus 1" inside parentheses when calculating rotary_seq_len, allowing None position_ids input. (#31500)

    * Mixtral: remove unnecessary plus 1 when calculating rotary_seq_len, allowing position_ids=None (no auto position_ids generation could be unsafe)

    * fix typo [:-1] to [:, -1]

    * to meet formatting requirement

    * to meet formatting requirement

    * remove white space

    * MixtralFlashAttention2: put "+ 1" inside parentheses when calculating rotary_seq_len, allowing None position_ids input. Fix format/style issue.

    * propagate to startcoder2, phi3, mixtral and qwen2

    * update qwen2_moe

commit 7c31d05b59a9dce24b8ddc4b2bb8c8cf6bb5fd77
Author: Shaopeng Fu <shaopengfu15@gmail.com>
Date:   Sat Aug 3 19:24:11 2024 +0300

    fix: (issue #32124) Exception raised when running `transformers/examples/flax/language-modeling/t5_tokenizer_model.py`. (#32157)

    fix: Exception raised when running .

commit c1aa0edb48217f416f4bbe6e3a9db1500284513b
Author: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Date:   Fri Aug 2 17:32:50 2024 +0800

    [generate] only require an attention mask for mps with torch<2.4 (#32367)

    * up

    * style

    * stopping

commit 083e13b7c47f674b11c74d1b7c7ee7cd1241b406
Author: Joao Gante <joaofranciscocardosogante@gmail.com>
Date:   Fri Aug 2 09:39:45 2024 +0100

    RoPE: Add numerical tests ✨  (#32380)

    tests! :D

commit 2af199c42b545f6248475ce456dd6c2a351b8522
Author: Raushan Turganbay <raushan@huggingface.co>
Date:   Fri Aug 2 09:54:16 2024 +0500

    Update docs (#32368)

    nits

commit 82efc53513a51660e629c7eca8210af1d67df00b
Author: Zach Mueller <muellerzr@gmail.com>
Date:   Thu Aug 1 15:18:43 2024 -0400

    Yell at the user if zero-3 init wasn't performed, but expected to have been done (#32299)

    * Test this zach

    * Test for improper init w/o zero3

    * Move back

    * Apply suggestions from code review

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Get rid of stars in warning

    * Make private

    * Make clear

    ---------

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

commit 51ab25e2932da15511ced35bcbdfa92d25c4794c
Author: OsamaS99 <62110783+OsamaS99@users.noreply.github.com>
Date:   Thu Aug 1 14:57:42 2024 +0200

    Fixed Hybrid Cache Shape Initialization. (#32163)

    * fixed hybrid cache init, added test

    * Fix Test Typo

    ---------

    Co-authored-by: Aaron Haag <aaron.haag@siemens.com>

commit e3d8285a84f803e962050e2c2283f3362e36bfbc
Author: Joao Gante <joaofranciscocardosogante@gmail.com>
Date:   Thu Aug 1 13:46:11 2024 +0100

    Docker: add `speech` dep to the consistency docker image (#32374)

commit ca59d6f77c9fda197222f9aa9205d8c7b5dff34e
Author: Nikos Karampatziakis <just.nikos@gmail.com>
Date:   Thu Aug 1 05:42:07 2024 -0700

    Offloaded KV Cache (#31325)

    * Initial implementation of OffloadedCache

    * enable usage via cache_implementation

    * Address feedback, add tests, remove legacy methods.

    * Remove flash-attn, discover synchronization bugs, fix bugs

    * Prevent usage in CPU only mode

    * Add a section about offloaded KV cache to the docs

    * Fix typos in docs

    * Clarifications and better explanation of streams

commit b4727a1216bb21df2795e973063ed07202235d7e
Author: Omar Salman <omar.salman@arbisoft.com>
Date:   Thu Aug 1 17:32:13 2024 +0500

    Fix conflicting key in init kwargs in PreTrainedTokenizerBase (#31233)

    * Fix conflicting key in init kwargs in PreTrainedTokenizerBase

    * Update code to check for callable key in save_pretrained

    * Apply PR suggestions

    * Invoke CI

    * Updates based on PR suggestion

commit db8c7caeb6b3969a2153b36ba3e5fdef6534c1d6
Author: Viktor Scherbakov <viktoroo.sch@gmail.com>
Date:   Thu Aug 1 14:30:10 2024 +0200

    Empty list in defaults for LLaMA special tokens during weights conversion (#32342)

    empty list in defaults

commit 2229ebe7220fb54bc5f91f575c2d7a988e7122cb
Author: Ita Zaporozhets <31893021+itazap@users.noreply.github.com>
Date:   Thu Aug 1 13:57:41 2024 +0200

    update clean_up_tokenization_spaces warning (#32371)

commit 05c1f9af9a5ebd213dd923e97f6fbed4c115f3c6
Author: Hanna Yukhymenko <49597980+ayukh@users.noreply.github.com>
Date:   Thu Aug 1 13:52:05 2024 +0200

    Check device map for saving tokenizer config on TPU (fix for issue #31971) (#32043)

    * Remove TPU device map for saving tokenizer config

    * Update tokenization_utils_base.py

    * Fix error msg when passing non-string device into tokenizer

    * Fix error message for non-string tokenizer device

    * Print out tokenizer device type in error msg

    * Update tokenization_utils_base.py

commit 9e2828403218da16d9759c9be020b70f51df373d
Author: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>
Date:   Thu Aug 1 19:51:20 2024 +0800

    add missing attribute _supports_param_buffer_assignment for gpt-j. (#32359)

    Co-authored-by: Guoming Zhang <37257613+nv-guomingz@users.noreply.github.com>

commit 48ed24c50ab29bf690f2ab030721e6a8b0aa5205
Author: Lunwen He <lwhecser@gmail.com>
Date:   Thu Aug 1 04:49:00 2024 -0700

    Remove size check between attn_weights and kv_seq_len for phi3 (#32339)

    * Remove size check between attn_weights and kv_seq_len

    * add unit tests

commit e234061cddd28bb8b82144833241883816289e40
Author: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Date:   Thu Aug 1 18:10:56 2024 +0800

    [whisper] compile compatibility with long-form decoding (#31772)

    * [whisper] compile compatibility with long-form decoding

    * clarify comment

    * fix after rebase

    * finalise

    * fix bsz

    * fix cache split

    * remove contiguous

    * style

    * finish

    * update doc

    * prevent cuda graph trace

commit 9451a385261b30e7319a2c93285ab76161e8c003
Author: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Date:   Thu Aug 1 16:05:27 2024 +0800

    [enc-dec cache] fix bug in indexing (#32370)

commit 453e74884fb7e2613e7b45033fbb3c1cadb638b4
Author: Raushan Turganbay <raushan@huggingface.co>
Date:   Thu Aug 1 09:48:03 2024 +0500

    LLaVa: add cache class attribute (#32278)

    cache class flag

commit 14ee2326e51cb210cec72f31b248cb722e9d5d1f
Author: Ricardo <ricardolcao@gmail.com>
Date:   Thu Aug 1 06:34:22 2024 +0800

    fix: warmup_steps check for training_args (#32236)

commit 53f0c9c2906e0b0f1623bfdfb420fca1e655098d
Author: Sai-Suraj-27 <sai.suraj.27.729@gmail.com>
Date:   Thu Aug 1 01:26:50 2024 +0530

    fix: Removed unnecessary `@staticmethod` decorator (#32361)

    * Fixed staticmethods with self as first argument.

    * Fixed staticmethods with self as first argument.

    * Fixed staticmethods with self as first argument.

    * Fixed staticmethods with self as first argument.

commit 92abe6033491dcaa958235e551f40f6b417d3771
Author: fxmarty <9808326+fxmarty@users.noreply.github.com>
Date:   Wed Jul 31 20:03:07 2024 +0200

    >3-5x faster torch.compile forward compilation for autoregressive decoder models (#32227)

    * draft

    * apply changes to all relevant archs

    * rerun ci - check_docstrings.py failing?

    * fix docstring

    * move 2D->4D mask creation to modeling file

    * repo consistency

    * fix the batch size = 1 case - calling contiguous is not enough

    * nit

    * style

    * propagate to gemma/gemma-2

    * prepare inputs for gemma generation

    * implement test and tiny fix in gemma2

    * Update src/transformers/models/bloom/modeling_bloom.py

    Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

    * fix copies

    * ci pass

    * fix gemma's test_compile_static_cache tests

    * flacky

    * retrigger ci

    ---------

    Co-authored-by: sanchit-gandhi <sanchit@huggingface.co>
    Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

commit b46bd8b9d2ac991c0c04674957ebc0a65fb3f42b
Author: Aymeric Roucher <69208727+aymeric-roucher@users.noreply.github.com>
Date:   Wed Jul 31 18:44:53 2024 +0200

    Fix error when streaming to gradio with non-string tool arguments (#32360)

    Fix error when streaming agent run to gradio with non-string tool arguments

commit ef177a5e1cdf0ca53e24e6d76e813198f7300dc4
Author: Joao Gante <joaofranciscocardosogante@gmail.com>
Date:   Wed Jul 31 16:04:48 2024 +0100

    Gemma 2: support assisted generation (#32357)

commit 5f1fcc299cb00c1edce5eb1efb8bacdde2365690
Author: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Date:   Wed Jul 31 14:51:04 2024 +0100

    [Idefics2] - Fix FA2 call for Perceiver layer (#32275)

    * Fix FA2 call for Perciever layer

    * [run_slow] idefics2

    * [run_slow] idefics2

    * [run_slow] idefics2

    * Fix up

    * [run_slow] idefics2

    * [run_slow] idefics2

    * [run_slow] idefics2

commit b75ad56620431984a44a962c98136c8571b4fca9
Author: Joao Gante <joaofranciscocardosogante@gmail.com>
Date:   Wed Jul 31 11:12:46 2024 +0100

    Llama 3.1: Fix incorrect `inv_freq` assignment (#32330)

    fix 💩

commit 7f552e28e0aca00ce60868c7620f7463eab60e14
Author: Raushan Turganbay <raushan@huggingface.co>
Date:   Wed Jul 31 10:33:38 2024 +0500

    Gemma2 and flash-attention (#32188)

    * enable flash-attn & static cache

    * this works, not the prev

    * fix for sliding window layers

    * not needed anymore

commit a3264332cfb5ab8675ddb42740a75aeee1782a74
Author: Raushan Turganbay <raushan@huggingface.co>
Date:   Wed Jul 31 10:01:12 2024 +0500

    LLaVA-NeXT: fix anyres shapes (#32314)

    fix

commit 6e2d04e429dc4ce240c99bd14b7b84550b79fd73
Author: Joshua Lochner <admin@xenova.com>
Date:   Tue Jul 30 23:36:38 2024 +0200

    Fix slow GemmaTokenizer and improve SPM slow -> fast conversion process (#32191)

    * Remove user-defined tokens which can be obtained through merges

    * Remove debug line

    * formatting

    * Refactor spm slow -> fast converter

    * revert unnecessary refactor

    * set comprehension

    * remove test files

    * Use `vocab_scores`

    * Always replace spiece underline with space in decode

    * we no longer need token filtering

    * Add save fast load slow unit test

    * Remove tokenizers version check

    * Remove duplicate code

    * Make `<start_of_turn>` and `<end_of_turn>` special tokens

    * Bias merge priority with length if score is the same

    * Add unit test for merge priority

    * CI

commit 026a173a64372e9602a16523b8fae9de4b0ff428
Author: Joao Gante <joaofranciscocardosogante@gmail.com>
Date:   Tue Jul 30 18:56:10 2024 +0100

    Repo checks: skip docstring checks if not in the diff (#32328)

    * tmp

    * skip files not in the diff

    * use git.Repo instead of an external subprocess

    * add tiny change to confirm that the diff is working on pushed changes

    * add make quality task

    * more profesh main commit reference

commit 516af4bb63538edc448f814e3690dd5171c4f311
Author: fkrasnov2 <krasnov.fedor2@wb.ru>
Date:   Tue Jul 30 20:21:45 2024 +0300

    fixes #32329 : The Torch code is correct - to get an average of 10% o… (#32335)

    fixes #32329 : The Torch code is correct - to get an average of 10% of the total, we want to take 50% of the remainder after we've already masked 80% with [MASK] in the previous step.

commit 62c60a30181a65e1a3a7f19c3055a240a6a21335
Author: Wing Lian <wing.lian@gmail.com>
Date:   Tue Jul 30 12:55:59 2024 -0400

    fixes to properly shard FSDP across cpu and meta for cpu_efficient_loading for prequantized 4bit (#32276)

commit 16271080333ad52be5349fb31d789fb232b68760
Author: Sai-Suraj-27 <sai.suraj.27.729@gmail.com>
Date:   Tue Jul 30 22:23:03 2024 +0530

    fix: Added missing raise keyword for few exceptions (#32333)

    Fixed raising of few exceptions.

commit bd54ed2ed7f578e4122f3e6d536fbe3c9bc76de1
Author: plaggy <35706832+plaggy@users.noreply.github.com>
Date:   Tue Jul 30 18:48:18 2024 +0200

    Alternative agent plan (#32295)

    * new agent plan

    * plan type assertion

    * style corrections

    * better prompt naming

    * make fixup

commit e68ec18ce224af879f22d904c7505a765fb77de3
Author: Joao Gante <joaofranciscocardosogante@gmail.com>
Date:   Tue Jul 30 15:49:14 2024 +0100

    Docs: formatting nits (#32247)

    * doc formatting nits

    * ignore non-autodocs

    * Apply suggestions from code review

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update src/transformers/models/esm/modeling_esm.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update src/transformers/models/esm/modeling_esm.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * make fixup

    ---------

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

commit 2fbbcf5007509c66b02924ce6dcff66f58e7f58c
Author: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>
Date:   Tue Jul 30 16:00:13 2024 +0200

    Fix M4T for ASR pipeline (#32296)

    * tentative fix

    * do the same for M4T

commit 084b5094eb490319719cc11cb05b751e0b419d49
Author: Luc Georges <McPatate@users.noreply.github.com>
Date:   Tue Jul 30 14:49:26 2024 +0200

    feat(ci): set `fetch-depth: 0` in trufflehog checkout step (#31663)

commit 20528f067cf9204cea5178ce0f837245e146e159
Author: Teddy Ferdinan <64476430+teddy-f-47@users.noreply.github.com>
Date:   Tue Jul 30 11:25:54 2024 +0200

    Cast epochs_trained to int when resuming training (#32286)

    * fix epochs_trained as int when resuming training

    * refactor

    ---------

    Co-authored-by: teddyferdinan <teddy.ferdinan@pwr.edu.pl>

commit 934fe1504e6d5e87e01d96305f4d97faa63cf4c1
Author: Isotr0py <2037008807@qq.com>
Date:   Tue Jul 30 17:01:00 2024 +0800

    Fix GGUF dequantize for `gguf==0.9.1` (#32298)

    * fix gguf dequantize for gguf==0.9.1

    * fix old version

    * make style

commit 3e8106d2533cbd890ddd1e919bd62132cd4718c3
Author: Gilad Turok <36947659+gil2rok@users.noreply.github.com>
Date:   Tue Jul 30 03:19:24 2024 -0400

    Docs: fix GaLore optimizer code example (#32249)

    Docs: fix GaLore optimizer example

    Fix incorrect usage of GaLore optimizer in Transformers trainer code example.

    The GaLore optimizer uses low-rank gradient updates to reduce memory usage. GaLore is quite popular and is implemented by the authors in [https://github.com/jiaweizzhao/GaLore](https://github.com/jiaweizzhao/GaLore). A few months ago GaLore was added to the HuggingFace Transformers library in https://github.com/huggingface/transformers/pull/29588.

    Documentation of the Trainer module includes a few code examples of how to use GaLore. However, the `optim_targe_modules` argument to the `TrainingArguments` function is incorrect, as discussed in https://github.com/huggingface/transformers/pull/29588#issuecomment-2006289512. This pull request fixes this issue.

commit f0bc49e7f61f74f055c47ad40e6010f57eed0b0b
Author: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
Date:   Mon Jul 29 22:12:21 2024 +0200

    use torch 2.4 in 2 CI jobs (#32302)

    Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>

commit a24a9a66f446dcb9277e31d16255536c5ce27aa6
Author: Aymeric Roucher <69208727+aymeric-roucher@users.noreply.github.com>
Date:   Mon Jul 29 20:12:44 2024 +0200

    Add stream messages from agent run for gradio chatbot (#32142)

    * Add stream_to_gradio method for running agent in gradio demo

commit 811a9caa2141bc98f96b36c69abcf1f934bd1fd2
Author: Guang Yang <42389959+guangy10@users.noreply.github.com>
Date:   Mon Jul 29 10:19:15 2024 -0700

    Make static cache compatible with torch.export (#32168)

commit 7f5d644e69068825bb5b6e84cdc56b3d3a9bd04f
Author: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Date:   Mon Jul 29 21:24:42 2024 +0800

    [pipeline] fix padding for 1-d tensors (#31776)

    * [pipeline] fix padding for 1-d tensors

    * add test

    * make style

    * Update tests/pipelines/test_pipelines_automatic_speech_recognition.py

    Co-authored-by: Kamil Akesbi <45195979+kamilakesbi@users.noreply.github.com>

    * Update tests/pipelines/test_pipelines_automatic_speech_recognition.py

    ---------

    Co-authored-by: Kamil Akesbi <45195979+kamilakesbi@users.noreply.github.com>

commit 3fbaaaa64d1ef3d8327adb577994d3d11277c77a
Author: Kamil Akesbi <45195979+kamilakesbi@users.noreply.github.com>
Date:   Mon Jul 29 11:19:52 2024 +0100

    Whisper tokenizer word level timestamps (#32197)

    * fix _fix_key in PreTrainedModel

    * fix _find_longest_common_sequence

    * add test

    * remove result.json

    * nit

    * update test

commit 7ffe25f2b935dcaf65079b04c5f91c8a42a99e28
Author: Joao Gante <joaofranciscocardosogante@gmail.com>
Date:   Mon Jul 29 10:52:13 2024 +0100

    Generate: end-to-end compilation (#30788)

    * mvp

    * added test (a few models need fixes)

    * fix a few test cases

    * test nits

    * harder test 😈

    * revert changes in stablelm

    * test with improved condition

    * add todo

    * tmp commit

    * merged with main

    * nits

    * add todo

    * final corrections

    * add docs for generation compilation

    * docs nits

    * add  tip

    * PR suggestions

    * add more details to the compilation docs

    * fix cache positions

    * cache is now init in generate; update docs

    * tag test as flaky

    * docs

    * post rebase make fixup and other nits

    * remove unintended changes

    * whisper (encoder-decoder) not supported

    * move token default updates to ; add tests for token defaults

    * push changes

    * manual rebase

    * chameleon doesn't support this

    * fix test_static_cache_mha_mqa_gqa (broken in another PR)

    * docs: dynamic is better with end-to-end compilation

commit 49928892d6491ff5a49c12cbc34695f6fa7ac0ed
Author: Sai-Suraj-27 <sai.suraj.27.729@gmail.com>
Date:   Mon Jul 29 15:20:43 2024 +0530

    fix(docs): Fixed a link in docs (#32274)

    Fixed a link in docs.

commit 6494479f1de9fe16e9c6f89e52eb0cf81f864a7c
Author: Fanli Lin <fanli.lin@intel.com>
Date:   Mon Jul 29 17:29:11 2024 +0800

    make `p_mask` a numpy array before passing to `select_starts_ends` (#32076)

    * fix

    * bug fix

    * refine

    * fix

commit 535fe78b9f1d148684723e51f00645351880c47a
Author: Joao Gante <joaofranciscocardosogante@gmail.com>
Date:   Mon Jul 29 10:06:05 2024 +0100

    Repo: remove exceptions in `check_docstrings` (#32259)

    remove exceptions

commit a2ad9d5ad53f68c1ad268f7f46538eac6f5b631b
Author: Sai-Suraj-27 <sai.suraj.27.729@gmail.com>
Date:   Mon Jul 29 14:13:09 2024 +0530

    fix: Fixed wrong argument passed to `convert_blip_checkpoint` function call (#32262)

    Removed one wrong argument passed to convert_blip_checkpoint function call.

commit 5019aabfacf7599b9a6b4e7a1adc1fb5c9017727
Author: leejet <leejet714@gmail.com>
Date:   Mon Jul 29 15:51:43 2024 +0800

    Optimize t5 tokenize logic to avoid redundant calls (#32270)

    * Optimize t5 tokenize logic to avoid redundant calls

    * fix and overwrite copies

commit f2122cc6eb8e50e4d1b45da54b43bba59a458b30
Author: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
Date:   Mon Jul 29 09:42:54 2024 +0200

    Upload new model failure report to Hub (#32264)

    upload

    Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>

commit f7396876849926afa87c9412d67c43618dad403d
Author: Raushan Turganbay <raushan@huggingface.co>
Date:   Mon Jul 29 10:58:59 2024 +0500

    🚨 Bloom support for cache class (#31445)

    * bloom dynamic cache

    * bloom follows standard cache format

    * no skips for bloom anymore

    * use cache position when possible

    * clean up

    * codestyle

    * Update src/transformers/models/bloom/modeling_bloom.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update src/transformers/models/bloom/modeling_bloom.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update src/transformers/models/bloom/modeling_bloom.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * pr comments

    * isinstance fix

    * address comments

    * make musicgen test happy

    * [run-slow] bloom

    ---------

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

commit 44f6fdd74f84744b159fa919474fd3108311a906
Author: Joao Gante <joaofranciscocardosogante@gmail.com>
Date:   Sat Jul 27 10:19:46 2024 +0100

    Llama 3.1: replace for loop by tensor ops at inv_freq initialization (#32244)

    * replace for loop by tensor ops

    * rm assert; readability

commit 8da90687308a10b33c5553b8a506cc04aab31702
Author: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
Date:   Fri Jul 26 20:52:45 2024 +0200

    More flexible trigger condition (#32251)

    update

    Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>

commit 81233c069c166af033794134bd8888783ac49ebe
Author: Raushan Turganbay <raushan@huggingface.co>
Date:   Fri Jul 26 14:45:55 2024 +0500

    Flash-Attn: fix generation when no attention mask or no pading (#32241)

    * fix

    * fix prev test (half of failures)

    * [run-slow] llama, gemma2

    * [run-slow] llama, gemma2

commit 27c7f971c0dcd3bb423ea221fe2bce751d313119
Author: Fanli Lin <fanli.lin@intel.com>
Date:   Fri Jul 26 17:41:27 2024 +0800

    [tests] fix `static` cache implementation is not compatible with `attn_implementation==flash_attention_2` (#32039)

    * add flash attention check

    * fix

    * fix

commit 5f841c74b62754f186a8c06a684d491524b7bc03
Author: Connor Anderson <thecatalystak@gmail.com>
Date:   Fri Jul 26 05:05:46 2024 -0400

    Add check for `target_sizes is None` in `post_process_image_guided_detection` for owlv2 (#31934)

    * Add check for target_sizes is None in post_process_image_guided_detection

    * Make sure Owlvit and Owlv2 in sync

    * Fix incorrect indentation; add check for correct size of target_sizes

commit f9756d9edb23354e3df50f7eb3f6b3129a25e453
Author: Rohit Dwivedula <25080952+rohitdwivedula@users.noreply.github.com>
Date:   Fri Jul 26 04:05:38 2024 -0500

    Adds: extra_repr for RMSNorm layers in most models (#32204)

    * adds: extra_repr() to RMSNorm layers in multiple models

    * adds: extra_repr for deprecated models as well

    * formatting as per style guide

commit b8e5cd5396f7c0cc2d5e10be6696ea38742abf51
Author: Sai-Suraj-27 <sai.suraj.27.729@gmail.com>
Date:   Fri Jul 26 14:03:02 2024 +0530

    Refactor: Removed un-necessary `object` base class (#32230)

    * Refactored to remove un-necessary object base class.

    * small fix.

commit 1c7ebf1d6eaf0ed0fd4101fd6eb7e64601429cfe
Author: João Nadkarni <38245862+joaonadkarni@users.noreply.github.com>
Date:   Fri Jul 26 09:38:59 2024 +0200

    don't log base model architecture in wandb if log model is false (#32143)

    * don't log base model architecture in wandb is log model is false

    * Update src/transformers/integrations/integration_utils.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * convert log model setting into an enum

    * fix formatting

    ---------

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

commit c46edfb8230bcc3152e8338742dc4822289acb3d
Author: Raushan Turganbay <raushan@huggingface.co>
Date:   Fri Jul 26 10:52:06 2024 +0500

    Resize embeds with DeepSpeed  (#32214)

    * fix resize when deepspeed

    * deepsped uses new embeds

    * we needed this

commit fad15fba78e4603cd20695757ad899a6687485f9
Author: Raushan Turganbay <raushan@huggingface.co>
Date:   Fri Jul 26 10:17:27 2024 +0500

    Llava: generate without images (#32183)

    * llava w/o images

    * tests

commit 4ab33c2d81866d4dd2f29df07f1a35491acbb39b
Author: Raushan Turganbay <raushan@huggingface.co>
Date:   Fri Jul 26 10:16:06 2024 +0500

    Generation: stop at `eos` for assisted decoding (#31301)

    * fix

    * move changes to prompt lookup

    * add test

    * set eos in assistant model

    * style

    * fix flakiness

    * changes for new `main`

    * Update tests/generation/test_utils.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update tests/generation/test_utils.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * add comment to explain

    ---------

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

commit 9d6c0641c4a3c2c5ecf4d49d7609edd5b745d9bc
Author: Pavel Iakubovskii <qubvel@gmail.com>
Date:   Thu Jul 25 19:20:47 2024 +0100

    Fix code snippet for Grounding DINO (#32229)

    Fix code snippet for grounding-dino

commit 3a83ec48a63a8298c8193be48cf00785674bfb70
Author: jrhe <4038905+jrhe@users.noreply.github.com>
Date:   Thu Jul 25 17:16:13 2024 +0100

    Allow a specific microphone to be used by the ffmpeg audio pipeline utility functions. Default to using the currently active microphone on Mac (#31846)

    * use currently active microphone on mac for ffmpeg_microphone

    * Allow ffmpeg_microphone device to be specified

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    ---------

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

commit 6ed0bf1e8543a7d8e6640bbf9a655c5e1401f7de
Author: Huazhong Ji <hzji210@gmail.com>
Date:   Fri Jul 26 00:01:06 2024 +0800

    translate philosophy.md to chinese (#32177)

    * translate philosophy.md to chinese

    * add the missing link

commit df6eee9201e4ba2b80cea021a18e95ada26ca2cc
Author: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
Date:   Thu Jul 25 16:12:23 2024 +0200

    Follow up for #31973 (#32025)

    * fix

    * [test_all] trigger full CI

    ---------

    Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>

commit de2318894e4f971ea2273c653a702dc93db2bd6a
Author: Kashif Rasul <kashif.rasul@gmail.com>
Date:   Thu Jul 25 15:12:23 2024 +0200

    [warnings] fix E721 warnings (#32223)

    fix E721 warnings

commit 9b9a54e61bf8749588178b37c23d77b90679fd10
Author: Kashif Rasul <kashif.rasul@gmail.com>
Date:   Thu Jul 25 15:11:43 2024 +0200

    [BigBird Pegasus] set _supports_param_buffer_assignment to False (#32222)

    set _supports_param_buffer_assignment to False

commit 1ecedf1d9ee927bac5b5bae8cb1892d936a5b622
Author: Austin <31086824+avlewis@users.noreply.github.com>
Date:   Thu Jul 25 07:20:27 2024 -0500

    Update question_answering.py (#32208)

commit f53a5dec7b03eb195dc89c82ae761b033db1ceb6
Author: Huazhong Ji <hzji210@gmail.com>
Date:   Thu Jul 25 17:04:04 2024 +0800

    remove unnecessary guard code related with pytorch versions 1.4.2 ~ 1.7.0 (#32210)

    remove unnecessary guard code related with pytorch versions 1.4.2 ~
    1.7.0

commit 5658e749adbaaf883caec003cecae8ce0a4261a6
Author: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Date:   Thu Jul 25 16:58:02 2024 +0800

    [whisper] fix short-form output type (#32178)

    * [whisper] fix short-form output type

    * add test

    * make style

    * update long-form tests

    * fixes

    * last fix

    * finalise test

commit 85a1269e19af022e04bc2aad82572cd5a9e8cdd9
Author: Sai-Suraj-27 <sai.suraj.27.729@gmail.com>
Date:   Wed Jul 24 22:30:21 2024 +0530

    fix: Replaced deprecated `unittest method` with the correct one (#32198)

    Replaced deprecated unittest method with the correct one.

commit edd68f4ed8db241bd3e9dc6c4ed96d471f243c9a
Author: Matt <Rocketknight1@users.noreply.github.com>
Date:   Wed Jul 24 17:36:32 2024 +0100

    :rotating_light: No more default chat templates (#31733)

    * No more default chat templates

    * Add the template to the GPT-SW3 tests since it's not available by default now

    * Fix GPT2 test

    * Fix Bloom test

    * Fix Bloom test

    * Remove default templates again

commit 1c122a46dc3c4448901f8d2f3018d9d58b846ba5
Author: Penut Chen <94501378+PenutChen@users.noreply.github.com>
Date:   Wed Jul 24 23:59:59 2024 +0800

    Support dequantizing GGUF FP16 format (#31783)

    * support gguf fp16

    * support gguf bf16 with pytorch

    * add gguf f16 test

    * remove bf16

commit af0e4b7b37b2d7eefe7531cf5201a5d6bae85525
Author: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Date:   Wed Jul 24 17:14:05 2024 +0200

    Fix float8_e4m3fn in modeling_utils (#32193)

    * Fix float8_e4m3fn in modeling_utils

    * style

    * fix

    * comment

commit 1392a6867f40a55dfabaf306745c67627598b1af
Author: Raushan Turganbay <raushan@huggingface.co>
Date:   Wed Jul 24 19:26:20 2024 +0500

    Fix resize embedding with Deepspeed (#32192)

    fix resize when deepspeed

commit 8d2534c4d0ab94a97a72d2ce6bb9ccd201abadb3
Author: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Date:   Wed Jul 24 16:06:39 2024 +0200

    let's not warn when someone is running a forward  (#32176)

    * let's not warn when someone is running a foward without cache + self.training

    * more models

    * fixup

commit e0182f3bd7f4753c1e378e052ceea67898d97359
Author: Joao Gante <joaofranciscocardosogante@gmail.com>
Date:   Wed Jul 24 15:00:48 2024 +0100

    RoPE: relaxed rope validation (#32182)

    * relaxed rope check

    * lets also accept rope_type=None, defaulting to the original implementation

    * type and rope_type can coexist

commit 165116bc145dcc186fa287e624b28a9ab3a79955
Author: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Date:   Wed Jul 24 14:03:40 2024 +0100

    Remove conversational pipeline tests (#32099)

    Remove conversation pipeline tests

commit 5f4ee98a7ade33e1c54fdd6181d04ee7b426b392
Author: Dr. Artificial曾小健 <875100501@qq.com>
Date:   Wed Jul 24 18:54:41 2024 +0800

    Update qwen2.md (#32108)

    * Update qwen2.md

    outdated description

    * Update qwen2.md

    amended

    * Update qwen2.md

    Update

    * Update qwen2.md

    fix wrong version code, now good to go

commit 8678879f1dc2578cec18232146bf19de97aecaa1
Author: 조준래 <junrae6454@naver.com>
Date:   Wed Jul 24 19:38:49 2024 +0900

    fix: default value reflects the runtime environment variables rather than the ones present at import time. (#32153)

    * fix: default value reflects the runtime environment variables rather than the ones present at import time.

    * Fix: Change `deterministic` to None by default; use env var if None

commit 01be5b48790f113b7d71943b580c842e3e097988
Author: Rohit Dwivedula <25080952+rohitdwivedula@users.noreply.github.com>
Date:   Wed Jul 24 02:09:59 2024 -0500

    adds: extra_repr() to MambaRMSNorm to include hidden size / size of weights in the layer (#32171)

    * adds: extra_repr() to MambaRMSNorm to include the hidden size of the layer

    * style fix with ruff:

commit c85510f958e6955d88ea1bafb4f320074bfbd0c1
Author: Fanli Lin <fanli.lin@intel.com>
Date:   Wed Jul 24 00:47:51 2024 +0800

    [docs] change temperature to a positive value (#32077)

    fix

commit bc2adb0112b6677b0dfb4105c74570a0f92183eb
Author: Sai-Suraj-27 <sai.suraj.27.729@gmail.com>
Date:   Tue Jul 23 21:22:41 2024 +0530

    fix: Fixed an if condition that is always evaluating to true (#32160)

    Fixed an if condition always evaluating to true.

commit 23f6a43f82fb2980f4b30cf3f95eb3a940384895
Author: Joao Gante <joaofranciscocardosogante@gmail.com>
Date:   Tue Jul 23 16:48:16 2024 +0100

    fix (#32162)

commit d5a99dfcee6e94065cb7c83cc8ab6fc5daa0cc4e
Author: Lysandre <lysandre.debut@reseau.eseo.fr>
Date:   Tue Jul 23 16:58:17 2024 +0200

    Llama 3.1 conversion

    Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>

commit ff0d708fe627d6715f9a3e97d0a7947f70437447
Author: Lysandre <lysandre@huggingface.co>
Date:   Tue Jul 23 17:12:47 2024 +0200

    Dev version: v4.44.0.dev0

commit d2c687b3f1859b5c61258af14abba5312c0e6201
Author: Sai-Suraj-27 <sai.suraj.27.729@gmail.com>
Date:   Tue Jul 23 20:37:31 2024 +0530

    Updated `ruff` to the latest version (#31926)

    * Updated ruff version and fixed the required code accorindg to the latest version.

    * Updated ruff version and fixed the required code accorindg to the latest version.

    * Added noqa directive to ignore 1 error shown by ruff

commit 9cf4f2aa9a9cecbb22e813931ef3bb72fc773540
Author: RhuiDih <166782544+RhuiDih@users.noreply.github.com>
Date:   Tue Jul 23 21:56:41 2024 +0800

    Enhancing SFT Training Efficiency Using Packing and FlashAttention2 with Position IDs (#31629)

    * add DataCollatorBatchFlattening

    * Update data_collator.py

    * change name

    * new FA2 flow if position_ids is provided

    * add comments

    * minor fix

    * minor fix data collator

    * add test cases for models

    * add test case for data collator

    * remove extra code

    * formating for ruff check and check_repo.py

    * ruff format

    ruff format tests src utils

    * custom_init_isort.py

commit 7d92009af647167bae338e9d4af8bc0452c62fbf
Author: Deep Gandhi <97520292+DeF0017@users.noreply.github.com>
Date:   Tue Jul 23 19:11:52 2024 +0530

    Added additional kwarg for successful running of optuna hyperparameter search (#31924)

    Update integration_utils.py

    Added additional kwarg

commit 63700628adb91600c84fe3bbbc4c667cd3e3aa71
Author: Alvaro Moran <6949769+tengomucho@users.noreply.github.com>
Date:   Tue Jul 23 14:18:19 2024 +0200

    feat(cache): StaticCache uses index_copy_ to avoid useless copy (#31857)

    * feat(cache): StaticCache uses index_copy_ to avoid useless copy

    Using index_copy_ allows for explicit in-place change of the tensor.
    Some backends (XLA) will otherwise copy the tensor, making the code
    slower and using more memory.

    Proposed implementation will end up using less memory and on XLA will
    result in less compilation, but the change is also quite generic, making
    no change whatsoever on CUDA or CPU backend.

    * feat(cache): SlidingWindowCache uses index_copy_ to avoid useless copy

    Applying the same change done in StaticCache.

    * fix(cache): fallback of index_copy_ when not implemented

    * fix(cache): in index_copy_ ensure tensors are on same device

    * [run slow] llama

    * fix(cache): add move of cache_position to same device in SlidingWindowCache

    * Revert "[run slow] llama"

    This reverts commit 02608dd14253ccd464e31c108e0cd94364f0e8b9.

commit a009fbdab32a4b068c24052a4dfe7a7bc0fc89f9
Author: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Date:   Tue Jul 23 12:23:34 2024 +0100

    Fix typing to be compatible with later py versions (#32155)

commit 3263b3435473cbb5dc66925bc29c1d32b5b8d431
Author: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Date:   Tue Jul 23 18:34:30 2024 +0800

    Revert "Incorrect Whisper long-form decoding timestamps " (#32148)

    Revert "Incorrect Whisper long-form decoding timestamps  (#32003)"

    This reverts commit cd48553fc8375e1a28d4d82cfe231dedf6a23af8.

commit 034b47784765e37ecc20f7ad43640f1a2c0094fd
Author: Amit Garg <gargamit@microsoft.com>
Date:   Tue Jul 23 03:33:22 2024 -0700

    Rename Phi-3 rope scaling type (#31436)

    * renamed phi3 rope_scaling type

    * fixed trailing whitespaces

    * fixed test

    * added warning

    * fixed format

commit bab32d6fe932a3372fbd6d5a84e3cacb12a61ae0
Author: Alexandre TL <alextorresleguet@icloud.com>
Date:   Tue Jul 23 12:32:19 2024 +0200

    Added mamba.py backend (#30139)

    * Update README.md

    * tests: forward ok

    * backward test done

    * done testing

    * removed check. scripts

    * Update README.md

    * added use_mambapy arg

    * fixed typo in warning

    * protected imports w/ mambapy package

    * delete pscan.py + raise rather than assert

    * Update import_utils.py

    * fix whitespaces and unused import

    * trailing whitespace + import block unformatted

    * Update modeling_mamba.py

    * transpose before pscan

    * shape comment

    * ran make style

    * use_mambapy=False by default

    Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

    * ran make fix-copies

    ---------

    Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

commit 9ced33ca7f909d9ace743dac083daba99c904d46
Author: Merve Noyan <merveenoyan@gmail.com>
Date:   Tue Jul 23 13:23:23 2024 +0300

    Fix video batching to videollava (#32139)

    ---------

    Co-authored-by: Merve Noyan <mervenoyan@Merve-MacBook-Pro.local>

commit a5b226ce9811aa6b31af0bc9c09c54493a4e67c1
Author: Cyril Vallez <cyril.vallez@gmail.com>
Date:   Tue Jul 23 12:21:23 2024 +0200

    Fix flash attention speed issue (#32028)

    Add the lru_cache for speed

commit a1844a3209eb7e75582684809203bc189931a90c
Author: Ita Zaporozhets <31893021+itazap@users.noreply.github.com>
Date:   Tue Jul 23 11:45:54 2024 +0200

    gguf conversion add_prefix_space=None for llama3 (#31937)

    * gguf conversion forces add_prefix_space=False for llama3, this is not required and forces from_slow, which fails. changing to None + test

    * typo

    * clean test

commit 2e113422b3504fe6de821bb9911b24273b11aa9c
Author: Joao Gante <joaofranciscocardosogante@gmail.com>
Date:   Tue Jul 23 10:42:55 2024 +0100

    Llama: RoPE refactor (#32135)

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
    Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

commit 5a4a76edb7ac6bbc764392e89adc11adda91f3e5
Author: bayllama <142558246+bayllama@users.noreply.github.com>
Date:   Tue Jul 23 02:28:44 2024 -0700

    Modify resize_token_embeddings to ensure output type is same as input (#31979)

    * Change resize_token_embeddings to make it return same Class that is passed to it

    * Add explanatory comment as requested in review

    * Add explanatory comments for add resizing function in lxmert

    * Add comment for padding_idx and moving _resize_bias in lxmert to LxmertForPreTraining

    ---------

    Co-authored-by: Prashanth Sateesh <prasatee@Prashanths-MBP.attlocal.net>
    Co-authored-by: Prashanth Sateesh <prasatee@Prashanths-MacBook-Pro.local>

commit 1535a2c93d325e529dc9a1907f99247fdf8a58e7
Author: Daniel Lok <daniel.lok@databricks.com>
Date:   Tue Jul 23 17:26:00 2024 +0800

    Disable quick init for TapasPreTrainedModel (#32149)

    add attribute to model

    Signed-off-by: Daniel Lok <daniel.lok@databricks.com>

commit 34b43211d782c00da6fef778dbfaff69bbf3f115
Author: mig-mfreitas <132093787+mig-mfreitas@users.noreply.github.com>
Date:   Tue Jul 23 10:07:58 2024 +0100

    Add YaRN and Dynamic-YaRN RoPE Scaling Methods (#30910)

    * Add YaRN and Dynamic-YaRN RoPE Scaling Methods

    YaRN (Yet another RoPE extension method) combines the NTK-By-Parts
    Interpolation and Attention Scaling methods, improving upon existing
    RoPE interpolation methods for longer context window sizes.

    Fine-tuned models maintain their original performance across benchmarks
    while enabling efficient extrapolation and transfer learning for
    quicker convergence, especially in compute-limited environments.

    We implement YaRN and Dynamic-YaRN for the following list of models:

     - LLaMA
     - Falcon
     - GPT-NeoX
     - Olmo
     - Persimmon
     - Phi
     - StableLM
     - OpenLLaMA

    New unit tests are added to assert YaRN's correct behavior on both
    short and long sequence inputs.

    For more details, please refer to https://arxiv.org/abs/2309.00071.

    Co-authored-by: Miguel Almeida <miguel.pessanha.almeida@tecnico.ulisboa.pt>

    * Refactor YaRN implementation for LLaMA

    Iterate on YaRN implementation for LLaMA and remove diff from remaining
    models for increased PR modularity.

    This commit includes the following changes:
    - Merge 'yarn_rope_scaling' and 'rope_scaling' dictionaries
    - Remove unnecessary attributes ('extrapolation_factor' and 'finetuned')
      from YaRN classes
    - Inherit 'forward' method in YaRN classes from superclass
    - Rename 'yarn' method to 'compute_yarn_scaling'
    - Extend YaRN tests with further assertions
    - Fix style inconsistencies

    Co-authored-by: Miguel Monte e Freitas <miguelmontefreitas@tecnico.ulisboa.pt>

    * Refactor Tensor Building Logic for YaRN

    - Comply with the tensor building logic introduced in #30743
    - Add referencing to the optimized Attention Factor equation
    - Remove Dynamic YaRN for a more agile deployment

    Co-authored-by: mig-mfreitas <mig-mfreitas@users.noreply.github.com>

    * remove unwanted file

    ---------

    Co-authored-by: Miguel Almeida <miguel.pessanha.almeida@tecnico.ulisboa.pt>
    Co-authored-by: mig-mfreitas <mig-mfreitas@users.noreply.github.com>
    Co-authored-by: Joao Gante <joao@huggingface.co>

commit 7405c1c77e4637768ea0ad5d27d8a4d8d67bfb19
Author: KonradSzafer <61851539+KonradSzafer@users.noreply.github.com>
Date:   Tue Jul 23 10:56:21 2024 +0200

    Add method to retrieve used chat template (#32032)

    encapsulate chat template logic

commit 605f3245dcca34381c35520c35ba0b701ed80d58
Author: Anton Vlasjuk <73884904+vasqu@users.noreply.github.com>
Date:   Tue Jul 23 10:11:12 2024 +0200

    Fix mask creations of `GPTNeoX` and `GPT2` (#31944)

    * fix mask creation of gpt2 and gpt_neox caused by me

    * forgot the reshape of masks when shape > 2

    * add tests for gpt neox and gpt2

    * nit on a comment

commit 2782aadae2b0b0c313eac3ee70f84f0335577635
Author: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Date:   Tue Jul 23 14:55:16 2024 +0800

    [modelling] remove un-necessary transpose for fa2 attention (#31749)

    * [whisper] remove un-necessary transpose for fa2 attention

    * propagate

commit f83c6f1d02fba5e5ced9357b9c9196c76d937af3
Author: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Date:   Tue Jul 23 14:54:38 2024 +0800

    Remove `trust_remote_code` when loading Libri Dummy (#31748)

    * [whisper integration] use parquet dataset for testing

    * propagate to others

    * more propagation

    * last one

commit 3aefb4ec7f957f9561a410eabc6f9d57b2f0384f
Author: Raushan Turganbay <raushan@huggingface.co>
Date:   Tue Jul 23 10:23:55 2024 +0500

    LLaVaNeXT: pad on right if training (#32134)

    * pad on right if training

    * docs

    * add tests

commit 251a2409c694c29ee28e66c954670c483cf54961
Author: James Thewlis <jamt9000@gmail.com>
Date:   Tue Jul 23 01:12:16 2024 -0400

    Add llama3-llava-next-8b to llava_next conversion script (#31395)

    * Add llama3-llava-next-8b to llava_next conversion script

    Adds support for the lmms-lab/llama3-llava-next-8b model to the
    convert_llava_next_weights_to_hf.py script, along with an example
    prompt generated from the llava_llama_3 conv_template in the LLaVA-NeXT
    repo.

    * Exclude <|begin_of_text|> from prompt example

    This token gets added automatically, so it should not be included in the
    prompt example.

    * Add llava-next-72b and llava-next-110b

    Adds the Qwen-based LLaVA-Next models to the conversion script, along
    with changes to load the models on multiple GPUs for inference.

    * Add llama3 and qwen prompt formats to docs

    * Chat prompt and padding side left for llama3 batched

    * update

    * Update src/transformers/models/llava_next/convert_llava_next_weights_to_hf.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update src/transformers/models/llava_next/convert_llava_next_weights_to_hf.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * remove code

    * better naming

    ---------

    Co-authored-by: raushan <raushan@huggingface.co>
    Co-authored-by: Raushan Turganbay <raushan.turganbay@alumni.nu.edu.kz>
    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

commit 96a074fa7e2c04b904f72d9e827398d4c5f90f25
Author: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Date:   Mon Jul 22 20:21:59 2024 +0200

    Add new quant method (#32047)

    * Add new quant method

    * update

    * fix multi-device

    * add test

    * add offload

    * style

    * style

    * add simple example

    * initial doc

    * docstring

    * style again

    * works ?

    * better docs

    * switch to non persistant

    * remove print

    * fix init

    * code review

commit bd9dca3b855b5a20ea11097b89c40f34d775f1c7
Author: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Date:   Mon Jul 22 19:42:47 2024 +0200

    set warning level to info for special tokens have been added (#32138)

    fixes #7002

commit 817a676bd711f9626e13578068b36ef09cf572dc
Author: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Date:   Mon Jul 22 18:29:50 2024 +0100

    Don't default to other weights file when use_safetensors=True (#31874)

    * Don't default to other weights file when use_safetensors=True

    * Add tests

    * Update tests/utils/test_modeling_utils.py

    * Add clarifying comments to tests

    * Update tests/utils/test_modeling_utils.py

    * Update tests/utils/test_modeling_utils.py

commit 74d0eb3fedf353bd670aa85ae8fcf4c85f287b5b
Author: Yoni Gottesman <yonigo10@gmail.com>
Date:   Mon Jul 22 20:24:43 2024 +0300

    Return assistant generated tokens mask in apply_chat_template  (#30650)

    return assistant generated tokens mask in apply_chat_template

commit 7987710696803c74ce1b5e7f9dfa055096a6c00e
Author: Bertrand Thia <56003053+bt2513@users.noreply.github.com>
Date:   Mon Jul 22 13:08:27 2024 -0400

    [RoBERTa] Minor clarifications to model doc (#31949)

    * minor edits and clarifications

    * address comment

    Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

    ---------

    Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

commit 12b6880c81db7742a29ea425dcb9e63b7dbdc449
Author: Sai-Suraj-27 <sai.suraj.27.729@gmail.com>
Date:   Mon Jul 22 22:16:17 2024 +0530

    fix: Fixed raising `TypeError` instead of `ValueError` for invalid type (#32111)

    * Raised TypeError instead of ValueError for invalid types.

    * Updated formatting using ruff.

    * Retrieved few changes.

    * Retrieved few changes.

    * Updated tests accordingly.

commit d1ec36b94f5ba45fb2423e74074cfedab48cfe73
Author: Woojun Jung <46880056+jungnerd@users.noreply.github.com>
Date:   Tue Jul 23 00:27:13 2024 +0900

    Update `ko/_toctree.yml` and remove `custom_tools.md` to reflect latest changes (#31969)

    update `ko/_toctree.yml` and remove `custom_tools.md`

commit 7ba028fccb82cbee792b67d596120da8ae9397c9
Author: Matt <Rocketknight1@users.noreply.github.com>
Date:   Mon Jul 22 16:07:29 2024 +0100

    Fix failing test with race condition (#32140)

    * Fix failing test with race condition

    * make fixup

    * monotonic_ns instead of randint

    * uuid4 instead of monotonic_ns

    * Add a finally cleanup step

commit 5a649ff3ecd70599dd0fea7ee430ba47b51a4556
Author: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Date:   Mon Jul 22 21:18:48 2024 +0800

    [generate] fix eos/pad id check on mps devices (#31695)

    Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

commit f2a1e3ca684df624016285266a0ae519e4483be7
Author: Lucain <lucainp@gmail.com>
Date:   Mon Jul 22 15:14:47 2024 +0200

    Mention model_info.id instead of model_info.modelId (#32106)

commit 0fcfc5ccc968ff5a1a439db04a94f566a0bd1d89
Author: Sai-Suraj-27 <sai.suraj.27.729@gmail.com>
Date:   Mon Jul 22 18:43:39 2024 +0530

    fix: Replaced deprecated `mktemp()` function (#32123)

    Replaced deprecated mktemp function.

commit c38c55f4fbc0163cc02ef4588fe2ec391171a2f0
Author: Joao Gante <joaofranciscocardosogante@gmail.com>
Date:   Mon Jul 22 14:06:49 2024 +0100

    Generate: store special token tensors under a unique variable name (#31980)

    * rename stuff

    * english; this one shouldn't be changed

    * add a _ to the new var names

    * musicgen

    * derp

commit aa8f86a421e23fe41b6333efc11ea4248e098d83
Author: Brian <23239305+b-chu@users.noreply.github.com>
Date:   Mon Jul 22 08:06:22 2024 -0400

    Fix shard order (#32023)

commit b3818805978b411713725a1b7470dc1bda073c29
Author: Aymeric Roucher <69208727+aymeric-roucher@users.noreply.github.com>
Date:   Mon Jul 22 10:49:57 2024 +0200

    Agents planning (#31702)

    * Allow planning for agents

commit 0fdea8607d7e01eb0e38a1ebeb7feee30a22f0cf
Author: Lucain <lucainp@gmail.com>
Date:   Fri Jul 19 20:32:39 2024 +0200

    Fix tests after `huggingface_hub` 0.24 (#32054)

    * adapt tests

    * style

    * comment

commit fe008d6ebea1f5770b740991daeefd9322fa434a
Author: Raushan Turganbay <raushan@huggingface.co>
Date:   Fri Jul 19 19:21:45 2024 +0500

    Chameleon: not supported with fast load (#32091)

    fixes

commit 62aa270f2ab3acca2a58cde8f08400ec49330b03
Author: Zach Mueller <muellerzr@gmail.com>
Date:   Fri Jul 19 08:58:53 2024 -0400

    Disable quick init for deepspeed (#32066)

    Disable via deepspeed

commit 89575b567e061fd87bdd655ba188b6c7a922d54a
Author: Kamil Akesbi <45195979+kamilakesbi@users.noreply.github.com>
Date:   Fri Jul 19 13:42:22 2024 +0100

    Support generating with fallback for short form audio in Whisper (#30984)

    * remove is_shortform

    * adapt _retrieve_max_frames_and_seek for short_form

    * return bos token in short and long form

    * add decoder_input_ids to short form audios

    * add eos token for  short form

    * handle short form token_timestamps

    * no need to return scores

    * add is_shortform conditions

    * handle when max_new_tokens is None - short form

    * handle assistant decoding

    * fix

    * handle return_dict_in_generate

    * handle split_by_batch for encoder_attentions attribute

    * handle num_beams>1

    * handle num_return_sequences>1 in generate_with_fallback

    * handle num_return_sequences>1 with return_dict_in_generate=True

    * raise error if max_new_tokens + decoder_inputs_ids > max_target_pos

    * fix

    * apply review suggestions

    * fix

    * Update src/transformers/models/whisper/generation_whisper.py

    Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>

    * Update src/transformers/models/whisper/generation_whisper.py

    Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>

    * Update src/transformers/models/whisper/generation_whisper.py

    Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>

    * fix

    * logits for both short form and long form

    * handle if logits_processor is None

    * test

    * apply review changes to num_return_sequences

    * add _expand_variables_for_generation

    * remove short form commented section

    * update comments

    * uncomment num_beams line in generate_with_fallback

    * update assistant decoding

    * handle return_segment with short form generation

    * up

    * fix output format is_shortform

    * overwrite beam_sample test

    * update _set_return_timestamps

    * apply review suggestions

    * apply review suggestions

    * remove seek_outputs_short_form

    * fix _stack_split_outputs

    * fix stack dim in _stack_split_outputs

    * update tests

    * fix past_key_values + beam tests

    * fix

    * clean _expand_variables_for_generation

    * make style

    * fix slow tests

    * make style

    * max_length condition

    * make style

    * add slow tests for shortform fallback

    * Update src/transformers/models/whisper/generation_whisper.py

    Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>

    * Update src/transformers/models/whisper/generation_whisper.py

    Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>

    * apply review changes

    * Update src/transformers/models/whisper/generation_whisper.py

    Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>

    * up

    * fix slow tests

    * apply review suggestions

    * update test

    * make style

    * small fix

    * fix

    * fix test_new_cache_format

    * fix past_key_values

    * fix

    * make style

    * fix slow tests

    * fix

    ---------

    Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>

commit 46835ec6aed62e9a73784f1b6a43030afd601e5e
Author: Merve Noyan <merveenoyan@gmail.com>
Date:   Fri Jul 19 15:40:40 2024 +0300

    Add image-text-to-text task guide (#31777)

    * Add image-text-to-text task page

    * Update docs/source/en/tasks/image_text_to_text.md

    Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

    * Update docs/source/en/tasks/image_text_to_text.md

    Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

    * Update docs/source/en/tasks/image_text_to_text.md

    Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

    * Update docs/source/en/tasks/image_text_to_text.md

    Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

    * Update docs/source/en/tasks/image_text_to_text.md

    Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

    * Update docs/source/en/tasks/image_text_to_text.md

    Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

    * Update docs/source/en/tasks/image_text_to_text.md

    Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

    * Update docs/source/en/tasks/image_text_to_text.md

    Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

    * Update docs/source/en/tasks/image_text_to_text.md

    Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

    * Update docs/source/en/tasks/image_text_to_text.md

    Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

    * Update docs/source/en/tasks/image_text_to_text.md

    Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

    * Address comments

    * Fix heading

    * Update docs/source/en/tasks/image_text_to_text.md

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update docs/source/en/tasks/image_text_to_text.md

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update docs/source/en/tasks/image_text_to_text.md

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update docs/source/en/tasks/image_text_to_text.md

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update docs/source/en/tasks/image_text_to_text.md

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update docs/source/en/tasks/image_text_to_text.md

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Address comments

    * Update image_text_to_text.md

    ---------

    Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

commit 4bd8f12972c6ad06e264baa39f17ec9dfa9a5cb2
Author: Merve Noyan <merveenoyan@gmail.com>
Date:   Fri Jul 19 14:50:34 2024 +0300

    Fixes to chameleon docs (#32078)

    * Fixes

    * Let's not use auto

commit 566b0f1fbf5feb53a18591ca215a8d1245a790ef
Author: Keith Stevens <keith@collinear.ai>
Date:   Fri Jul 19 03:56:45 2024 -0700

    Fix progress callback deepcopy (#32070)

    * Replacing ProgressCallbacks deepcopy with a shallowcopy

    * Using items instead of entries

    * code cleanup for copy in trainer callback

    * Style fix for ProgressCallback

commit e316c5214fe51de0bf8e824245bfd6225c9925aa
Author: Raushan Turganbay <raushan@huggingface.co>
Date:   Fri Jul 19 15:38:01 2024 +0500

    VideoLLaVa: fix chat format in docs (#32083)

    fix chat format

commit 22f888b3fab3d914882b8f44896a5658712f535c
Author: Joshua Lochner <admin@xenova.com>
Date:   Fri Jul 19 11:19:35 2024 +0200

    [mistral] Fix FA2 attention reshape for Mistral Nemo (#32065)

    * [mistral] Fix FA2 attention reshape

    * [run-slow] mistral

commit cd48553fc8375e1a28d4d82cfe231dedf6a23af8
Author: Kamil Akesbi <45195979+kamilakesbi@users.noreply.github.com>
Date:   Fri Jul 19 09:26:38 2024 +0100

    Incorrect Whisper long-form decoding timestamps  (#32003)

    * fix lo form timestamps in decode_batch

    * Update src/transformers/models/whisper/tokenization_whisper.py

    Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>

    * Update src/transformers/models/whisper/tokenization_whisper.py

    Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>

    * add test

    * make style

    * fix copies

    * Update src/transformers/models/whisper/tokenization_whisper_fast.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update src/transformers/models/whisper/tokenization_whisper.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update src/transformers/models/whisper/processing_whisper.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update src/transformers/models/whisper/tokenization_whisper.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * apply review suggestions

    * fix

    * fix copies

    * fix

    * Update src/transformers/models/whisper/tokenization_whisper_fast.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * fix-copies

    ---------

    Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>
    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

commit 56a7745704261919dd8117e3a8aa4fb43fade30e
Author: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Date:   Fri Jul 19 10:20:03 2024 +0200

    [Chameleon, Hiera] Improve docs (#32038)

    * Improve docs

    * Fix docs

    * Fix code snippet

commit b873234cb649a24865021f0d598627ce2b24d34a
Author: Raushan Turganbay <raushan@huggingface.co>
Date:   Fri Jul 19 10:08:56 2024 +0500

    Llava: add default chat templates (#31691)

    * add default chat templates

    * Update src/transformers/models/llava/processing_llava.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update src/transformers/models/llava_next/processing_llava_next.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * more clear docstring and docs

    * Update docs/source/en/model_doc/llava.md

    Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>

    * Update docs/source/en/model_doc/llava_next.md

    Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>

    * Update docs/source/en/model_doc/vipllava.md

    Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>

    * add tests

    * remove default templates (see #31733)

    * load chat template from another file

    * Update docs/source/en/model_doc/llava_next.md

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * revert some changes in docs

    * forgot vipllava

    * chat template file is not temporary hack

    * warn if loading from processor

    * not that file

    * similarly modify `save_pretrained`

    * Update tests/models/llava_next/test_processor_llava_next.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update tests/models/vipllava/test_processor_vipllava.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update docs/source/en/model_doc/vipllava.md

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update src/transformers/processing_utils.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update src/transformers/processing_utils.py

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update docs/source/en/model_doc/vipllava.md

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update docs/source/en/model_doc/llava.md

    Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

    * Update docs/source/en/model_doc/llava.md

    Co-authored-by: amyeroberts <22614925+amyeroberts@use…
@olisicky
Copy link

olisicky commented Aug 25, 2024

Hi, thank you for this! I tried to use it without DataCollatorWithFlattening, but with data prepared with input_ids and position_ids, and I encountered RuntimeError: CUDA error: an illegal memory access was encountered right after the branch elif position_ids is not None and not (torch.diff(position_ids, dim=-1) >= 0).all() and query_length != 1: was entered.

I am using:

  • llama3
  • flash_attn version: 2.5.8
  • deepspeed - ZeRO1
  • transformers 4.44.1

Thank you!

@RhuiDih
Copy link
Contributor Author

RhuiDih commented Aug 26, 2024

@olisicky Hi, could you provide a minimal code example to reproduce the error? That would help greatly.

@olisicky
Copy link

@olisicky Hi, could you provide a minimal code example to reproduce the error? That would help greatly.

Hi @RhuiDih. I found a mistake in my data while preparing a minimal example for you :D, so thank you for encouraging me to do so! Since I am preparing the data without DataCollatorWithFlattening, I kept padding to context_length even for bs=1, which left plenty of 0s in the position_ids tensor at the indexes corresponding to PAD tokens.
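For illustration, a minimal sketch of why those zeroed position_ids look like packing to the check quoted above (toy values only, not the actual transformers code path):

```python
import torch

# Two packed examples (lengths 4 and 3) flattened into one row, as the collator would produce:
packed_position_ids = torch.tensor([[0, 1, 2, 3, 0, 1, 2]])

# One example of length 4 padded to context_length=7, with 0s at the PAD positions:
padded_position_ids = torch.tensor([[0, 1, 2, 3, 0, 0, 0]])

def looks_packed(position_ids):
    # Mirrors the quoted monotonicity check: any drop in position_ids along
    # the sequence is treated as the start of a new packed example.
    return not (torch.diff(position_ids, dim=-1) >= 0).all()

print(looks_packed(packed_position_ids))  # True: routed to the padding-free FA2 path, as intended
print(looks_packed(padded_position_ids))  # also True: the 0s at PAD positions are mistaken for a new example
```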

However, I have an additional question. You set -100 in the labels for the first token of each sequence in the batch, and I wonder why. I thought that using position_ids by itself should prevent cross-example attention, but here you mentioned that the -100 is meant to prevent the last token of the previous example from predicting the first token of the next example.

If I have sequences that start with a BOS token and it gets ignored during training because of the -100 in the labels, wouldn't I lose the contribution of that first token? Is this necessary? Or should we rather use some dummy token there instead of a part of the sequence itself?

Thank you very much!

@RhuiDih
Copy link
Contributor Author

RhuiDih commented Aug 27, 2024

@olisicky
The -100 is due to the causal loss implementation: if we flatten all sequences into a batch of 1 without setting -100 accordingly, it incurs unwanted loss at the example boundaries.

The first token of input_ids is still used to produce logits, hence no importance is lost. What we want to ignore is the first token of labels, due to the causal loss implementation mentioned above.
Hope it is clear.
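For concreteness, this is roughly what DataCollatorWithFlattening returns for a small batch (schematic values; see the collator implementation for the exact behaviour):

```python
from transformers import DataCollatorWithFlattening

collator = DataCollatorWithFlattening()
features = [
    {"input_ids": [10, 11, 12, 13]},
    {"input_ids": [20, 21, 22]},
]
batch = collator(features)

# Schematically, one flattened row with no padding and no attention_mask:
# batch["input_ids"]    -> [[  10, 11, 12, 13,   20, 21, 22]]
# batch["position_ids"] -> [[   0,  1,  2,  3,    0,  1,  2]]   restarts at each example boundary
# batch["labels"]       -> [[-100, 11, 12, 13, -100, 21, 22]]   first label of each example set to -100
```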

@olisicky
Copy link

olisicky commented Sep 2, 2024

@olisicky The -100 is due to the causal loss implementation: if we flatten all sequences into a batch of 1 without setting -100 accordingly, it incurs unwanted loss at the example boundaries.

The first token of input_ids is still used to produce logits, hence no importance is lost. What we want to ignore is the first token of labels, due to the causal loss implementation mentioned above. Hope it is clear.

Sorry for the late reply. Yes, thank you. I will try it.

@MahmoudAshraf97
Copy link

Is packing usable in the transformers generate method? AFAIK only padding is supported.

@ArthurZucker
Copy link
Collaborator

ArthurZucker commented Sep 6, 2024

Packing is supported with the collator introduced in this PR. We don't pack when generating though, PRs are welcome for this! 🤗
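For anyone looking for the training-side setup, a minimal usage sketch (the checkpoint name and toy dataset below are placeholders; any decoder-only model whose FlashAttention2 path handles position_ids should work):

```python
import torch
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    DataCollatorWithFlattening,
    Trainer,
    TrainingArguments,
)

# Toy pre-tokenized dataset; in practice these ids come from your tokenizer.
train_dataset = Dataset.from_list(
    [{"input_ids": [1, 306, 4966, 278, 2]}, {"input_ids": [1, 910, 338, 2]}]
)

# Placeholder checkpoint; pick a decoder-only model with FA2 support.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=2, bf16=True),
    train_dataset=train_dataset,
    # Flattens each mini-batch into one padding-free row and emits position_ids instead of attention_mask.
    data_collator=DataCollatorWithFlattening(),
)
trainer.train()
```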

@qibao77
Copy link

qibao77 commented Oct 15, 2024

How to use this feature for pretraining?

@RhuiDih
Copy link
Contributor Author

RhuiDih commented Oct 16, 2024

@qibao77 usually one would truncate and pack to max length during pretraining, so this PR will not benefit pretraining unless most of your pretraining data are short.
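For contrast, the conventional pretraining packing mentioned above usually looks roughly like this (a generic concatenate-then-chunk sketch, not something added by this PR):

```python
# Rough sketch of conventional pretraining packing: concatenate all documents,
# then slice into fixed-size blocks, ignoring example boundaries.
def group_texts(examples, block_size=2048):
    concatenated = sum(examples["input_ids"], [])  # examples["input_ids"]: list of token-id lists
    total_length = (len(concatenated) // block_size) * block_size
    input_ids = [concatenated[i : i + block_size] for i in range(0, total_length, block_size)]
    return {"input_ids": input_ids, "labels": [ids.copy() for ids in input_ids]}
```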

Cemberk added a commit to Cemberk/transformers that referenced this pull request Nov 14, 2024
* Added mamba.py backend (#30139)

* Update README.md

* tests: forward ok

* backward test done

* done testing

* removed check. scripts

* Update README.md

* added use_mambapy arg

* fixed typo in warning

* protected imports w/ mambapy package

* delete pscan.py + raise rather than assert

* Update import_utils.py

* fix whitespaces and unused import

* trailing whitespace + import block unformatted

* Update modeling_mamba.py

* transpose before pscan

* shape comment

* ran make style

* use_mambapy=False by default

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* ran make fix-copies

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Rename Phi-3 rope scaling type (#31436)

* renamed phi3 rope_scaling type

* fixed trailing whitespaces

* fixed test

* added warning

* fixed format

* Revert "Incorrect Whisper long-form decoding timestamps " (#32148)

Revert "Incorrect Whisper long-form decoding timestamps  (#32003)"

This reverts commit cd48553fc8375e1a28d4d82cfe231dedf6a23af8.

* Fix typing to be compatible with later py versions (#32155)

* feat(cache): StaticCache uses index_copy_ to avoid useless copy (#31857)

* feat(cache): StaticCache uses index_copy_ to avoid useless copy

Using index_copy_ allows for explicit in-place change of the tensor.
Some backends (XLA) will otherwise copy the tensor, making the code
slower and using more memory.

Proposed implementation will end up using less memory and on XLA will
result in less compilation, but the change is also quite generic, making
no change whatsoever on CUDA or CPU backend.

* feat(cache): SlidingWindowCache uses index_copy_ to avoid useless copy

Applying the same change done in StaticCache.

* fix(cache): fallback of index_copy_ when not implemented

* fix(cache): in index_copy_ ensure tensors are on same device

* [run slow] llama

* fix(cache): add move of cache_position to same device in SlidingWindowCache

* Revert "[run slow] llama"

This reverts commit 02608dd14253ccd464e31c108e0cd94364f0e8b9.

* Added additional kwarg for successful running of optuna hyperparameter search (#31924)

Update integration_utils.py

Added additional kwarg

* Enhancing SFT Training Efficiency Using Packing and FlashAttention2 with Position IDs (#31629)

* add DataCollatorBatchFlattening

* Update data_collator.py

* change name

* new FA2 flow if position_ids is provided

* add comments

* minor fix

* minor fix data collator

* add test cases for models

* add test case for data collator

* remove extra code

* formating for ruff check and check_repo.py

* ruff format

ruff format tests src utils

* custom_init_isort.py

* Updated `ruff` to the latest version (#31926)

* Updated ruff version and fixed the required code according to the latest version.

* Updated ruff version and fixed the required code according to the latest version.

* Added noqa directive to ignore 1 error shown by ruff

* Dev version: v4.44.0.dev0

* Llama 3.1 conversion

Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>

* fix (#32162)

* fix: Fixed an if condition that is always evaluating to true (#32160)

Fixed an if condition always evaluating to true.

* [docs] change temperature to a positive value (#32077)

fix

* adds: extra_repr() to MambaRMSNorm to include hidden size / size of weights in the layer (#32171)

* adds: extra_repr() to MambaRMSNorm to include the hidden size of the layer

* style fix with ruff:

* fix: default value reflects the runtime environment variables rather than the ones present at import time. (#32153)

* fix: default value reflects the runtime environment variables rather than the ones present at import time.

* Fix: Change `deterministic` to None by default; use env var if None

* Update qwen2.md (#32108)

* Update qwen2.md

outdated description

* Update qwen2.md

amended

* Update qwen2.md

Update

* Update qwen2.md

fix wrong version code, now good to go

* Remove conversational pipeline tests (#32099)

Remove conversation pipeline tests

* RoPE: relaxed rope validation (#32182)

* relaxed rope check

* lets also accept rope_type=None, defaulting to the original implementation

* type and rope_type can coexist

* let's not warn when someone is running a forward  (#32176)

* let's not warn when someone is running a forward without cache + self.training

* more models

* fixup

* Fix resize embedding with Deepspeed (#32192)

fix resize when deepspeed

* Fix float8_e4m3fn in modeling_utils (#32193)

* Fix float8_e4m3fn in modeling_utils

* style

* fix

* comment

* Support dequantizing GGUF FP16 format (#31783)

* support gguf fp16

* support gguf bf16 with pytorch

* add gguf f16 test

* remove bf16

* :rotating_light: No more default chat templates (#31733)

* No more default chat templates

* Add the template to the GPT-SW3 tests since it's not available by default now

* Fix GPT2 test

* Fix Bloom test

* Fix Bloom test

* Remove default templates again

* fix: Replaced deprecated `unittest method` with the correct one (#32198)

Replaced deprecated unittest method with the correct one.

* [whisper] fix short-form output type (#32178)

* [whisper] fix short-form output type

* add test

* make style

* update long-form tests

* fixes

* last fix

* finalise test

* remove unnecessary guard code related with pytorch versions 1.4.2 ~ 1.7.0 (#32210)

remove unnecessary guard code related with pytorch versions 1.4.2 ~
1.7.0

* Update question_answering.py (#32208)

* [BigBird Pegasus] set _supports_param_buffer_assignment to False (#32222)

set _supports_param_buffer_assignment to False

* [warnings] fix E721 warnings (#32223)

fix E721 warnings

* Follow up for #31973 (#32025)

* fix

* [test_all] trigger full CI

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>

* translate philosophy.md to chinese (#32177)

* translate philosophy.md to chinese

* add the missing link

* Allow a specific microphone to be used by the ffmpeg audio pipeline utility functions. Default to using the currently active microphone on Mac (#31846)

* use currently active microphone on mac for ffmpeg_microphone

* Allow ffmpeg_microphone device to be specified

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Fix code snippet for Grounding DINO (#32229)

Fix code snippet for grounding-dino

* Generation: stop at `eos` for assisted decoding (#31301)

* fix

* move changes to prompt lookup

* add test

* set eos in assistant model

* style

* fix flakiness

* changes for new `main`

* Update tests/generation/test_utils.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update tests/generation/test_utils.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* add comment to explain

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Llava: generate without images (#32183)

* llava w/o images

* tests

* Resize embeds with DeepSpeed  (#32214)

* fix resize when deepspeed

* deepsped uses new embeds

* we needed this

* don't log base model architecture in wandb if log model is false (#32143)

* don't log base model architecture in wandb is log model is false

* Update src/transformers/integrations/integration_utils.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* convert log model setting into an enum

* fix formatting

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Refactor: Removed un-necessary `object` base class (#32230)

* Refactored to remove un-necessary object base class.

* small fix.

* Adds: extra_repr for RMSNorm layers in most models (#32204)

* adds: extra_repr() to RMSNorm layers in multiple models

* adds: extra_repr for deprecated models as well

* formatting as per style guide

* Add check for `target_sizes is None` in `post_process_image_guided_detection` for owlv2 (#31934)

* Add check for target_sizes is None in post_process_image_guided_detection

* Make sure Owlvit and Owlv2 in sync

* Fix incorrect indentation; add check for correct size of target_sizes

* [tests] fix `static` cache implementation is not compatible with `attn_implementation==flash_attention_2` (#32039)

* add flash attention check

* fix

* fix

* Flash-Attn: fix generation when no attention mask or no pading (#32241)

* fix

* fix prev test (half of failures)

* [run-slow] llama, gemma2

* [run-slow] llama, gemma2

* More flexible trigger condition (#32251)

update

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>

* Llama 3.1: replace for loop by tensor ops at inv_freq initialization (#32244)

* replace for loop by tensor ops

* rm assert; readability

* 🚨 Bloom support for cache class (#31445)

* bloom dynamic cache

* bloom follows standard cache format

* no skips for bloom anymore

* use cache position when possible

* clean up

* codestyle

* Update src/transformers/models/bloom/modeling_bloom.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/bloom/modeling_bloom.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/bloom/modeling_bloom.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* pr comments

* isinstance fix

* address comments

* make musicgen test happy

* [run-slow] bloom

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Upload new model failure report to Hub (#32264)

upload

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>

* Optimize t5 tokenize logic to avoid redundant calls (#32270)

* Optimize t5 tokenize logic to avoid redundant calls

* fix and overwrite copies

* fix: Fixed wrong argument passed to `convert_blip_checkpoint` function call (#32262)

Removed one wrong argument passed to convert_blip_checkpoint function call.

* Repo: remove exceptions in `check_docstrings` (#32259)

remove exceptions

* make `p_mask` a numpy array before passing to `select_starts_ends` (#32076)

* fix

* bug fix

* refine

* fix

* fix(docs): Fixed a link in docs (#32274)

Fixed a link in docs.

* Generate: end-to-end compilation (#30788)

* mvp

* added test (a few models need fixes)

* fix a few test cases

* test nits

* harder test 😈

* revert changes in stablelm

* test with improved condition

* add todo

* tmp commit

* merged with main

* nits

* add todo

* final corrections

* add docs for generation compilation

* docs nits

* add  tip

* PR suggestions

* add more details to the compilation docs

* fix cache positions

* cache is now init in generate; update docs

* tag test as flaky

* docs

* post rebase make fixup and other nits

* remove unintended changes

* whisper (encoder-decoder) not supported

* move token default updates to ; add tests for token defaults

* push changes

* manual rebase

* chameleon doesn't support this

* fix test_static_cache_mha_mqa_gqa (broken in another PR)

* docs: dynamic is better with end-to-end compilation

* Whisper tokenizer word level timestamps (#32197)

* fix _fix_key in PreTrainedModel

* fix _find_longest_common_sequence

* add test

* remove result.json

* nit

* update test

* [pipeline] fix padding for 1-d tensors (#31776)

* [pipeline] fix padding for 1-d tensors

* add test

* make style

* Update tests/pipelines/test_pipelines_automatic_speech_recognition.py

Co-authored-by: Kamil Akesbi <45195979+kamilakesbi@users.noreply.github.com>

* Update tests/pipelines/test_pipelines_automatic_speech_recognition.py

---------

Co-authored-by: Kamil Akesbi <45195979+kamilakesbi@users.noreply.github.com>

* Make static cache compatible with torch.export (#32168)

* Add stream messages from agent run for gradio chatbot (#32142)

* Add stream_to_gradio method for running agent in gradio demo

* use torch 2.4 in 2 CI jobs (#32302)

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>

* Docs: fix GaLore optimizer code example (#32249)

Docs: fix GaLore optimizer example

Fix incorrect usage of GaLore optimizer in Transformers trainer code example.

The GaLore optimizer uses low-rank gradient updates to reduce memory usage. GaLore is quite popular and is implemented by the authors in [https://github.com/jiaweizzhao/GaLore](https://github.com/jiaweizzhao/GaLore). A few months ago GaLore was added to the HuggingFace Transformers library in https://github.com/huggingface/transformers/pull/29588.

Documentation of the Trainer module includes a few code examples of how to use GaLore. However, the `optim_targe_modules` argument to the `TrainingArguments` function is incorrect, as discussed in https://github.com/huggingface/transformers/pull/29588#issuecomment-2006289512. This pull request fixes this issue.

* Fix GGUF dequantize for `gguf==0.9.1` (#32298)

* fix gguf dequantize for gguf==0.9.1

* fix old version

* make style

* Cast epochs_trained to int when resuming training (#32286)

* fix epochs_trained as int when resuming training

* refactor

---------

Co-authored-by: teddyferdinan <teddy.ferdinan@pwr.edu.pl>

* feat(ci): set `fetch-depth: 0` in trufflehog checkout step (#31663)

* Fix M4T for ASR pipeline (#32296)

* tentative fix

* do the same for M4T

* Docs: formatting nits (#32247)

* doc formatting nits

* ignore non-autodocs

* Apply suggestions from code review

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/esm/modeling_esm.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/esm/modeling_esm.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* make fixup

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Alternative agent plan (#32295)

* new agent plan

* plan type assertion

* style corrections

* better prompt naming

* make fixup

* fix: Added missing raise keyword for few exceptions (#32333)

Fixed raising of few exceptions.

* fixes to properly shard FSDP across cpu and meta for cpu_efficient_loading for prequantized 4bit (#32276)

* fixes #32329 : The Torch code is correct - to get an average of 10% o… (#32335)

fixes #32329 : The Torch code is correct - to get an average of 10% of the total, we want to take 50% of the remainder after we've already masked 80% with [MASK] in the previous step.

* Repo checks: skip docstring checks if not in the diff (#32328)

* tmp

* skip files not in the diff

* use git.Repo instead of an external subprocess

* add tiny change to confirm that the diff is working on pushed changes

* add make quality task

* more profesh main commit reference

* Fix slow GemmaTokenizer and improve SPM slow -> fast conversion process (#32191)

* Remove user-defined tokens which can be obtained through merges

* Remove debug line

* formatting

* Refactor spm slow -> fast converter

* revert unnecessary refactor

* set comprehension

* remove test files

* Use `vocab_scores`

* Always replace spiece underline with space in decode

* we no longer need token filtering

* Add save fast load slow unit test

* Remove tokenizers version check

* Remove duplicate code

* Make `<start_of_turn>` and `<end_of_turn>` special tokens

* Bias merge priority with length if score is the same

* Add unit test for merge priority

* CI

* LLaVA-NeXT: fix anyres shapes (#32314)

fix

* Gemma2 and flash-attention (#32188)

* enable flash-attn & static cache

* this works, not the prev

* fix for sliding window layers

* not needed anymore

* Llama 3.1: Fix incorrect `inv_freq` assignment (#32330)

fix 💩

* [Idefics2] - Fix FA2 call for Perceiver layer (#32275)

* Fix FA2 call for Perciever layer

* [run_slow] idefics2

* [run_slow] idefics2

* [run_slow] idefics2

* Fix up

* [run_slow] idefics2

* [run_slow] idefics2

* [run_slow] idefics2

* Gemma 2: support assisted generation (#32357)

* Fix error when streaming to gradio with non-string tool arguments (#32360)

Fix error when streaming agent run to gradio with non-string tool arguments

* >3-5x faster torch.compile forward compilation for autoregressive decoder models (#32227)

* draft

* apply changes to all relevant archs

* rerun ci - check_docstrings.py failing?

* fix docstring

* move 2D->4D mask creation to modeling file

* repo consistency

* fix the batch size = 1 case - calling contiguous is not enough

* nit

* style

* propagate to gemma/gemma-2

* prepare inputs for gemma generation

* implement test and tiny fix in gemma2

* Update src/transformers/models/bloom/modeling_bloom.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* fix copies

* ci pass

* fix gemma's test_compile_static_cache tests

* flacky

* retrigger ci

---------

Co-authored-by: sanchit-gandhi <sanchit@huggingface.co>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* fix: Removed unnecessary `@staticmethod` decorator (#32361)

* Fixed staticmethods with self as first argument.

* Fixed staticmethods with self as first argument.

* Fixed staticmethods with self as first argument.

* Fixed staticmethods with self as first argument.

* fix: warmup_steps check for training_args (#32236)

* LLaVa: add cache class attribute (#32278)

cache class flag

* [enc-dec cache] fix bug in indexing (#32370)

* [whisper] compile compatibility with long-form decoding (#31772)

* [whisper] compile compatibility with long-form decoding

* clarify comment

* fix after rebase

* finalise

* fix bsz

* fix cache split

* remove contiguous

* style

* finish

* update doc

* prevent cuda graph trace

* Remove size check between attn_weights and kv_seq_len for phi3 (#32339)

* Remove size check between attn_weights and kv_seq_len

* add unit tests

* add missing attribute _supports_param_buffer_assignment for gpt-j. (#32359)

Co-authored-by: Guoming Zhang <37257613+nv-guomingz@users.noreply.github.com>

* Check device map for saving tokenizer config on TPU (fix for issue #31971) (#32043)

* Remove TPU device map for saving tokenizer config

* Update tokenization_utils_base.py

* Fix error msg when passing non-string device into tokenizer

* Fix error message for non-string tokenizer device

* Print out tokenizer device type in error msg

* Update tokenization_utils_base.py

* update clean_up_tokenization_spaces warning (#32371)

* Empty list in defaults for LLaMA special tokens during weights conversion (#32342)

empty list in defaults

* Fix conflicting key in init kwargs in PreTrainedTokenizerBase (#31233)

* Fix conflicting key in init kwargs in PreTrainedTokenizerBase

* Update code to check for callable key in save_pretrained

* Apply PR suggestions

* Invoke CI

* Updates based on PR suggestion

* Offloaded KV Cache (#31325)

* Initial implementation of OffloadedCache

* enable usage via cache_implementation

* Address feedback, add tests, remove legacy methods.

* Remove flash-attn, discover synchronization bugs, fix bugs

* Prevent usage in CPU only mode

* Add a section about offloaded KV cache to the docs

* Fix typos in docs

* Clarifications and better explanation of streams

* Docker: add `speech` dep to the consistency docker image (#32374)

* Fixed Hybrid Cache Shape Initialization. (#32163)

* fixed hybrid cache init, added test

* Fix Test Typo

---------

Co-authored-by: Aaron Haag <aaron.haag@siemens.com>

* Yell at the user if zero-3 init wasn't performed, but expected to have been done (#32299)

* Test this zach

* Test for improper init w/o zero3

* Move back

* Apply suggestions from code review

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Get rid of stars in warning

* Make private

* Make clear

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update docs (#32368)

nits

* RoPE: Add numerical tests ✨  (#32380)

tests! :D

* [generate] only require an attention mask for mps with torch<2.4 (#32367)

* up

* style

* stopping

* fix: (issue #32124) Exception raised when running `transformers/examples/flax/language-modeling/t5_tokenizer_model.py`. (#32157)

fix: Exception raised when running .

* MixtralFlashAttention2: put "plus 1" inside parentheses when calculating rotary_seq_len, allowing None position_ids input. (#31500)

* Mixtral: remove unnecessary plus 1 when calculating rotary_seq_len, allowing position_ids=None (no auto position_ids generation could be unsafe)

* fix typo [:-1] to [:, -1]

* to meet formatting requirement

* to meet formatting requirement

* remove white space

* MixtralFlashAttention2: put "+ 1" inside parentheses when calculating rotary_seq_len, allowing None position_ids input. Fix format/style issue.

* propagate to startcoder2, phi3, mixtral and qwen2

* update qwen2_moe

* Bump keras from 2.8.0 to 2.13.1 in /examples/research_projects/decision_transformer (#32393)

Bump keras in /examples/research_projects/decision_transformer

Bumps [keras](https://github.com/keras-team/keras) from 2.8.0 to 2.13.1.
- [Release notes](https://github.com/keras-team/keras/releases)
- [Commits](https://github.com/keras-team/keras/compare/v2.8.0...v2.13.1)

---
updated-dependencies:
- dependency-name: keras
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* fix: SeamlessM4TFeatureExtractor stride remainder (#32088)

* fix: SeamlessM4TFeatureExtractor stride remainder

* Added attention mask size test

* Reran ruff for style correction

* Phi3 tests: fix typing for Python 3.8 (#32388)

fix phi

* #32184 save total_vocab_size (#32240)

* save total_vocab_size = vocab_size + user added tokens to speed up operation

* updating length when added_tokens_decoder is set

* add test len(tokenizer)

* add values for neftune (#32399)

I always forget what typical values are, and I have to look at the paper every time. This will be a helpful reminder.

* Fix documentation references to google/bit-50 model (#32407)

* Persist embedding type of BART and mBART models after resize (#32242)

* fix: persist embedding type of MBartConditonalGeneration after resize

* fix: persist embedding type of BartConditonalGeneration after resize

* fix: Updated `test_embeded_special_tokens` for luke and mluke models (#32413)

Fixed tokenizertests for luke, mluke models.

* Respect the config's attn_implementation if set (#32383)

* Respect the config's attn if set

* Update test - can override in from_config

* Fix

* Fix documentation links and code reference to model llava-next (#32434)

* Cache: create docs (#32150)

* draft

* updates

* works?

* try adding python example in hidden section

* another try

* hwo do i render python

* format as html code?

* Update docs/source/en/kv_cache.md

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Update docs/source/en/kv_cache.md

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Update docs/source/en/kv_cache.md

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Update docs/source/en/kv_cache.md

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Update docs/source/en/kv_cache.md

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* one more small update

* should render hidden secrtion now

* add outputs

* fix links

* check links

* update all links

* update with offloaded cache

* all cache is importable, so they appear in docs

* fix copies

* docstring...

---------

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Llava: fix checkpoint_doc (#32458)

fix: add new llava like model bug

* add the missing flash attention test marker (#32419)

* add flash attention check

* fix

* fix

* add the missing marker

* bug fix

* add one more

* remove order

* add one more

* Update kwargs validation for `preprocess` with decorator (#32024)

* BLIP preprocess

* BIT preprocess

* BRIDGETOWER preprocess

* CHAMELEON preprocess

* CHINESE_CLIP preprocess

* CONVNEXT preprocess

* DEIT preprocess

* DONUT preprocess

* DPT preprocess

* FLAVA preprocess

* EFFICIENTNET preprocess

* FUYU preprocess

* GLPN preprocess

* IMAGEGPT preprocess

* INTRUCTBLIPVIDEO preprocess

* VIVIT preprocess

* ZOEDEPTH preprocess

* VITMATTE preprocess

* VIT preprocess

* VILT preprocess

* VIDEOMAE preprocess

* VIDEOLLAVA

* TVP processing

* TVP fixup

* SWIN2SR preprocess

* SIGLIP preprocess

* SAM preprocess

* RT-DETR preprocess

* PVT preprocess

* POOLFORMER preprocess

* PERCEIVER preprocess

* OWLVIT preprocess

* OWLV2 preprocess

* NOUGAT preprocess

* MOBILEVIT preprocess

* MOBILENETV2 preprocess

* MOBILENETV1 preprocess

* LEVIT preprocess

* LAYOUTLMV2 preprocess

* LAYOUTLMV3 preprocess

* Add test

* Update tests

* Fix get large model config for Switch Transformer encoder only tester (#32438)

* Dependencies: fix typo (#32389)

deps_2

* Add Nemotron HF Support (#31699)

* Add nemotron support

* fix inference

* add unit test

* add layernorm1p as a class to avoid meta device mismatch

* test fixed

* Add copied_from statements

* remove pretraining_tp args

* remove nemotronlayernorm

* force LN computation done in FP32

* remove nemotrontokenizer and use llamatokenizer

* license update

* add option for kv_channels for minitron8b

* remove assert

* o_proj fixed

* o_proj reshape

* add gated_proj option

* typo

* remove todos

* fix broken test after merging latest main

* remove nezha/nat after merging main

* change default config to 15b model

* add nemo conversion script

* rename conversion script

* remove gate_proj option

* pr comment resolved

* fix unit test

* rename kv_channels to head_dim

* resolve PR issue

* add nemotron md

* fix broken tests

* refactor rope for nemotron

* test fix

* remove linearscaling

* whitespace and import

* fix some copied-from

* code style fix

* reformatted

* add position_embedding to nemotronattention

* rope refactor to only use config, copied-from fix

* format

* Run make fix-copies

* nemotron md with autodoc

* doc  fix

* fix order

* pass check_config_docstrings.py

* fix config_attributes

* remove all llama BC related code

* Use PreTrainedTokenizerFast

* ruff check examples

* conversion script update

* add nemotron to toctree

* Generate: fix end to end compilation (#32465)

* Add codestral mamba2 (#32080)

* add new model like

* draft cuda forward - mismatched keys (sharding on conv1)

* match keys successfully

* fix split

* get generation/forward running (wrong gens, norm?)

* :update

* some refactoring

* fixes

* works up until copy to cache

* fix

* update

* NON WORKING VERSION

* version that work?

* nit

* fix config

* fix conversion script

* working cuda forward

* nit

* update

* simplification

* make mamba slow simple work

* no einops

* todo

* fix style

* no einops

* update fix no einsum

* nit

* remove einops

* bug: scan_output differs strongly

* add rms norm option

* fix fast + slow generation with and w/o cache :heavy_check_mark:

* draft integration tests

* remove a big chunk of the einsum

* fix slow, fast generations, without any einsum

* fix copies

* fix structure

* fix up modeling and tests

* fix tests

* clamping is indeed worse

* recover mamba2 cache test

* fix copies

* no cache position (yet)

* fix tf tests

* fix matmul for generate

* fixup

* skip cache tests for now

* [run-slow]mamba2

* tune out hidden states for padding

* test batched generation

* propagate attention mask changes

* fix past length

* fix integration test

* style

* address comments

* update readme

* add mamba2 version check

* fix tests

* [run-slow]mamba2

* skip edge tests

* [run-slow]mamba2

* last fixup

* [run-slow]mamba2

* update README

---------

Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>

* Migrate import checks not need accelerate, and be more clear on min versions (#32292)

* Migrate import checks to secondary accelerate calls

* better errs too

* Revert, just keep the import checks + remove accelerate-specific things

* Rm extra'

* Empty commit for ci

* Small nits

* Final

* Documentation: BOS token_id deprecation change for NLLB (#32443)

Update nllb.md

* dev version 4.45.0

* `is_torchdynamo_compiling` -- cast a wide exception net (#32476)

* cast a wide net

* make fix-copies with a few manual changes

* add copied from

* Revert "fixes to properly shard FSDP across cpu and meta for cpu_effcient_loading for prequantized 4bit (#32276)" (#32477)

* Revert "fixes to properly shard FSDP across cpu and meta for cpu_efficient_loading for prequantized 4bit (#32276)"

This reverts commit 62c60a30181a65e1a3a7f19c3055a240a6a21335.

We uncovered an issue with this change that caused our training runs to hang.

* `is_torchdynamo_compiling` -- cast a wide exception net (#32476)

* cast a wide net

* make fix-copies with a few manual changes

* add copied from

---------

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* 🌐 [i18n-KO] Translated `mask_generation.md` to Korean (#32257)

* docs: ko: tasks/mask_generation.md

* feat: nmt draft

* fix : toc local

* fix : manual edits

* fix : ko-toctree

* fix: resolve suggestions

Co-authored-by: boyunJang <gobook1234@naver.com>
Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>

* fix: resolve suggestions

Co-authored-by: boyunJang <gobook1234@naver.com>
Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>

* fix: resolve suggestions

* fix: resolve suggestions

* fix: resolve suggestions

---------

Co-authored-by: boyunJang <gobook1234@naver.com>
Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>

* 🌐 [i18n-KO] Translated `idefics.md` to Korean (#32258)

* docs: ko: tasks/idefics.md

* feat: nmt draft

* fix: manual edits

* fix: resolve suggestions

Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>
Co-authored-by: Harheem Kim <49297157+harheem@users.noreply.github.com>
Co-authored-by: timdalxx <48753785+jeongiin@users.noreply.github.com>

---------

Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>
Co-authored-by: Harheem Kim <49297157+harheem@users.noreply.github.com>
Co-authored-by: timdalxx <48753785+jeongiin@users.noreply.github.com>

* 🌐 [i18n-KO] Translated `image_to_image.md` to Korean (#32327)

* docs: ko: tasks/image_to_image.md

* feat: nmt draft

* fix: manual edits

* fix: resolve suggestions

Co-authored-by: Jihun Lim <31366038+heuristicwave@users.noreply.github.com>
Co-authored-by: Jiwook Han <33192762+mreraser@users.noreply.github.com>

* fix: handle remaining suggestions

Co-authored-by: Jiwook Han <33192762+mreraser@users.noreply.github.com>

---------

Co-authored-by: Jihun Lim <31366038+heuristicwave@users.noreply.github.com>
Co-authored-by: Jiwook Han <33192762+mreraser@users.noreply.github.com>

* Cache: new Cache format in decoder-only models (#31421)

* draft bart with new cache

* add cache for decoder-only models

* revert utils

* modify docstring

* revert bart

* minor fixes

* fix copies (not related)

* revert tests

* remove enc-dec related code

* remove bloom

* remove opt (enc-dec)

* update docstring

* git, codegen, gpt_neo, gpt_neox, gpj

* clean up

* copied from statements

* revert

* tmp

* update warning msg

* forgot git

* add more flags

* run-slow git,codegen,gpt_neo,gpt_neox,gpj

* add cache flag to VLMs

* remove files

* style

* video LLMs also need a flag

* style

* llava will go in another PR

* style

* [run-slow] codegen, falcon, git, gpt_neo, gpt_neox, gptj, idefics

* Update src/transformers/models/gpt_neo/modeling_gpt_neo.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* copy from

* deprecate until v4.45 and warn if not training

* nit

* fix test

* test static cache

* add more tests and fix models

* fix copies

* return sliding window mask

* run slow tests & fix + codestyle

* one more falcon fix for alibi

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Gemma2: add cache warning (#32279)

* gemma2 fallback to dynamic cache

* Update src/transformers/models/gemma2/modeling_gemma2.py

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Update src/transformers/models/gemma2/modeling_gemma2.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* raise error and dont fallback to dynamic cache

* prev will break most forward calls/tests

* Update src/transformers/models/gemma2/modeling_gemma2.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* update

* fix copies

---------

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* enable xla fsdp (#32048)

* enable xla fsdp

* add acceleration version check for xla fsdp

* Fix typo in tokenization_utils_base.py (#32484)

* Agents use grammar (#31735)

* Allow optional use of grammars to constrain generation

* fix broken link in docs (#32491)

`https://huggingface.co/docs/transformers/en/main_classes/pipelines#transformers.TextGenerationPipeline.__call__`

`generate_kwargs (dict, optional) — Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework here).`

link in "here" doesnt work

* Docs: alert for the possibility of manipulating logits (#32467)

* logits

* words

* 🌐 [i18n-KO] Translated `gptq.md` to Korean (#32293)

* fix: manual edits

* fix: manual edits2

* fix: delete files

* fix: resolve suggestions

Co-authored-by: Sungmin Oh <fabxoe.kor@gmail.com>
Co-authored-by: SeungYoun Lee <84276596+win2dvp21@users.noreply.github.com>
Co-authored-by: 김준재 <55151385+junejae@users.noreply.github.com>

* fix: resolve suggestions

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Sungmin Oh <fabxoe.kor@gmail.com>
Co-authored-by: SeungYoun Lee <84276596+win2dvp21@users.noreply.github.com>
Co-authored-by: 김준재 <55151385+junejae@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* 🌐 [i18n-KO] Translated `prompting.md` to Korean (#32294)

* docs: ko: tasks/prompting.md

* feat: nmt-draft

* fix: update translation in prompting.md

* fix: update toctree.yml

* fix: manual edits

* fix: toctree edits

* fix: resolve suggestions

Co-authored-by: boyunJang <gobook1234@naver.com>
Co-authored-by: Harheem Kim <49297157+harheem@users.noreply.github.com>
Co-authored-by: timdalxx <48753785+jeongiin@users.noreply.github.com>

---------

Co-authored-by: boyunJang <gobook1234@naver.com>
Co-authored-by: Harheem Kim <49297157+harheem@users.noreply.github.com>
Co-authored-by: timdalxx <48753785+jeongiin@users.noreply.github.com>

* 🌐 [i18n-KO] Translated `quantization/quanto.md` to Korean (#32281)

* docs: ko: quantization/quanto.md

* feat: nmt draft

* fix: resolve suggestions

Co-authored-by: SeungYoun Lee <84276596+win2dvp21@users.noreply.github.com>
Co-authored-by: Minki Kim <100768622+1kmmk1@users.noreply.github.com>
Co-authored-by: 김준재 <55151385+junejae@users.noreply.github.com>

* fix: resolve suggestions

Co-authored-by: SeungYoun Lee <84276596+win2dvp21@users.noreply.github.com>

---------

Co-authored-by: SeungYoun Lee <84276596+win2dvp21@users.noreply.github.com>
Co-authored-by: Minki Kim <100768622+1kmmk1@users.noreply.github.com>
Co-authored-by: 김준재 <55151385+junejae@users.noreply.github.com>

* 🌐 [i18n-KO] Translated `image_feature_extraction.md` to Korean (#32239)

* docs: ko: tasks/images_feature_extraction.md

* feat: nmt draft

* fix: manual edits

* fix: manual edits

* fix: manual edits

* fix: manual edits

* feat: manual edits

* Update docs/source/ko/tasks/image_feature_extraction.md

Co-authored-by: Jihun Lim <31366038+heuristicwave@users.noreply.github.com>

* Update docs/source/ko/tasks/image_feature_extraction.md

Co-authored-by: Jihun Lim <31366038+heuristicwave@users.noreply.github.com>

* fix: manual edits

---------

Co-authored-by: Jihun Lim <31366038+heuristicwave@users.noreply.github.com>

* Fix references to model google mt5 small (#32497)

* Docs: Fixed WhisperModel.forward’s docstring link (#32498)

Fixed WhisperModel.forward’s docstring link.

* 🌐 [i18n-KO] Translated `chat_templating.md` to Korean (#32362)

* docs: ko: chat_templating.md

* feat: nmt draft

* fix: manual edits

* Update docs/source/ko/chat_templating.md

Co-authored-by: Sungmin Oh <fabxoe.kor@gmail.com>

* Update docs/source/ko/chat_templating.md

Co-authored-by: Sungmin Oh <fabxoe.kor@gmail.com>

* fix: apply suggestions from code review - anchor

Co-authored-by: Sungmin Oh <fabxoe.kor@gmail.com>

* fix: manual edits

Co-authored-by: SeungYoun Lee <84276596+win2dvp21@users.noreply.github.com>
Co-authored-by: Minki Kim <100768622+1kmmk1@users.noreply.github.com>

* fix: manual edits

* fix: delete 'default template' section

---------

Co-authored-by: Sungmin Oh <fabxoe.kor@gmail.com>
Co-authored-by: SeungYoun Lee <84276596+win2dvp21@users.noreply.github.com>
Co-authored-by: Minki Kim <100768622+1kmmk1@users.noreply.github.com>

* Fix link to autoclass_tutorial.md in i18n.md (#32501)

* Fix typo: depracted -> deprecated (#32489)

Hello!

## Pull Request overview
* Fix typo

## Details
This should speak for itself.

cc @itazap @ArthurZucker 

- Tom Aarsen

* Fix issue #32518: Update llm_tutorial.md (#32523)

Update llm_tutorial.md

remove comma re: issue 32518

https://github.com/huggingface/transformers/issues/32518

* Change Phi3 `_supports_sdpa` to True (#32457)

* Change `_supports_sdpa` to True

* add phi3 to sdpa support list

* Uniformize kwargs for processors - GroundingDINO (#31964)

* fix typo

* uniform kwargs

* make style

* add comments

* remove return_tensors

* remove common_kwargs from processor since it propagates

* make style

* return_token_type_ids to True

* revert the default imagekwargs since it does not accept any value in the image processor

* revert processing_utils.py

* make style

* add molbap's commit

* fix typo

* fix common processor

* remain

* Revert "add molbap's commit"

This reverts commit a476c6ee88318ce40d73ea31e2dc2d4faa8ae410.

* add unsync PR

* revert

* make CI happy

* nit

* import annotationformat

* Fix add-new-model-like (#31773)

* handle (processor_class, None) returned by ModelPatterns

* handle (slow, fast) image processors in add model

* handle old image processor case

* Add Qwen2-Audio (#32137)

* add qwen2audio

* Update check_repo.py

* fix style

* fix test

* fix style

* add model size

* Qwen2AudioEncoderModel->Qwen2AudioEncoder; add copy info

* Update src/transformers/models/qwen2_audio/modeling_qwen2_audio.py

Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>

* Update src/transformers/models/qwen2_audio/modeling_qwen2_audio.py

Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>

* Update src/transformers/models/qwen2_audio/modeling_qwen2_audio.py

Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>

* switch the attention_mask and the feature_attention_mask

* add to PRIVATE_MODELS in check_repo.py; add to MODEL_NAMES_TO_IGNORE in check_table.py

* fix initialization

* update chat_template

* fix consistency issue after copy

* add docstrings to _merge_input_ids_with_audio_features

* add copied from to prepare_inputs_for_generation

* add more details to docs

* rm comment

* add init_std

* Update src/transformers/models/qwen2_audio/modeling_qwen2_audio.py

Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>

* Update src/transformers/models/qwen2_audio/modeling_qwen2_audio.py

Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>

* Update src/transformers/models/qwen2_audio/modeling_qwen2_audio.py

Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>

* Update src/transformers/models/qwen2_audio/modeling_qwen2_audio.py

Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>

* update

* Update docs/source/en/model_doc/qwen2_audio.md

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* update tests

* rm ignore_index

* update processor

* rm ffmpeg_read

* Update tests/models/qwen2_audio/test_modeling_qwen2_audio.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update docs/source/en/model_doc/qwen2_audio.md

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update docs/source/en/model_doc/qwen2_audio.md

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update docs/source/en/model_doc/qwen2_audio.md

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* update

* typo

* [run_slow] qwen2_audio

* [run_slow] qwen2_audio

* [run_slow] qwen2_audio

* fix quality

* [run_slow] qwen2_audio

* [run_slow] qwen2_audio

* [run_slow] qwen2_audio

* add official model

---------

Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* filter flash_attn optional imports loading remote code (#30954)

* filter flash_attn optional imports loading remote code

* improve pattern

* fix code style

* Update src/transformers/dynamic_module_utils.py

Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>

---------

Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>

* 🌐 [i18n-KO] Translated `ko-llm_tutorial_optimization.md` to Korean (#32372)

* docs: ko: llm_tutorial_optimization.md

* feat: nmt draft

* fix: manual edits

* Update docs/source/ko/llm_tutorial_optimization.md

Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>

* Update docs/source/ko/llm_tutorial_optimization.md

Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>

* fix: resolve suggestions - 1

Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>
Co-authored-by: timdalxx <48753785+jeongiin@users.noreply.github.com>
Co-authored-by: boyunJang <gobook1234@naver.com>

* fix: resolve suggestions - 2

Co-authored-by: boyunJang <gobook1234@naver.com>
Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>
Co-authored-by: timdalxx <48753785+jeongiin@users.noreply.github.com>

---------

Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>
Co-authored-by: timdalxx <48753785+jeongiin@users.noreply.github.com>
Co-authored-by: boyunJang <gobook1234@naver.com>

* 🌐 [i18n-KO] Translated `trainer.md` to Korean (#32260)

* docs: ko: ko-trainer

* feat: nmt draft

* fix: manual edits

* fix: manual edits

* fix: glossary

* fix: glossary

* Apply suggestions from code review

Co-authored-by: Jinuk <45095330+JinukHong@users.noreply.github.com>
Co-authored-by: SeongWooChoi <46990061+nuatmochoi@users.noreply.github.com>

---------

Co-authored-by: Jinuk <45095330+JinukHong@users.noreply.github.com>
Co-authored-by: SeongWooChoi <46990061+nuatmochoi@users.noreply.github.com>

* 🌐 [i18n-KO] Translated `eetq.md` to Korean (#32352)

* docs: ko: quantization/eetq.md

* feat: nmt draft

* fix docs: ko: quantization/eetq.md

* fix docs: ko: quantization/eetq.md

* fix: resolve suggestions

Co-authored-by: Jiwook Han <33192762+mreraser@users.noreply.github.com>

* fix: resolve suggestions

* fix: resolve suggestions

---------

Co-authored-by: Jiwook Han <33192762+mreraser@users.noreply.github.com>

* 🌐 [i18n-KO] Translated `fsdp.md` to Korean (#32261)

* docs: ko: fsdp.md

* feat: nmt draft

* fix: manual edits

* Apply suggestions from code review

Co-authored-by: 김준재 <55151385+junejae@users.noreply.github.com>
Co-authored-by: Minki Kim <100768622+1kmmk1@users.noreply.github.com>

* fix: resolve suggestions

* Update docs/source/ko/fsdp.md

Co-authored-by: 김준재 <55151385+junejae@users.noreply.github.com>

* Update docs/source/ko/fsdp.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: 김준재 <55151385+junejae@users.noreply.github.com>
Co-authored-by: Minki Kim <100768622+1kmmk1@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* 🌐 [i18n-KO] Translated `bitsandbytes.md` to Korean (#32408)

* docs: ko: quantization/bitsandbytes.md

* feat: nmt draft

* fix: minor typos

* fix: manual edits

* fix: manual edits

* fix: resolve suggestions

Co-authored-by: wony617 <49024958+Jwaminju@users.noreply.github.com>
Co-authored-by: YONGSANG <71686691+4N3MONE@users.noreply.github.com>
Co-authored-by: Woojun Jung <46880056+jungnerd@users.noreply.github.com>

* fix: resolve suggestions

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: wony617 <49024958+Jwaminju@users.noreply.github.com>
Co-authored-by: YONGSANG <71686691+4N3MONE@users.noreply.github.com>
Co-authored-by: Woojun Jung <46880056+jungnerd@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Fix generate with `inputs_embeds` as input (#32493)

* I think inputs_embeds has ndim == 3

* fix sequence length catch

* add generate test

* [run-slow]olmo, persimmon, gemma, gemma2, qwen2, llama

* skip whisper

* fix bart test

* more fixes

* Fixed test `test_static_cache_exportability` with torch 2.4.0 (#32516)

Workaround the export issue in torch 2.4

Co-authored-by: Guang Yang <guangyang@fb.com>

* Fix code example to load bigcode starcoder2 7b (#32474)

* [docs] Translation guide (#32547)

clarify

* Gemma2: fix FA2 generation (#32553)

fix FA2

* Fix a bug in Qwen2Audio (#32552)

fix _update_model_kwargs_for_generation

* fix slow integration gemma2 test (#32534)

no empty revision

* fix non contiguous tensor value error in save_pretrained (#32422)

Signed-off-by: duzhanwei <duzhanwei@bytedance.com>
Co-authored-by: duzhanwei <duzhanwei@bytedance.com>

* 🌐 [i18n-KO] Translated `agent.md` to Korean (#32351)

* docs: ko: main_classes/agent

* feat: chatgpt draft

* fix: manual edits

* fix: resolve suggestions

Co-authored-by: Woojun Jung <46880056+jungnerd@users.noreply.github.com>
Co-authored-by: thsamaji <60818655+thsamajiki@users.noreply.github.com>
Co-authored-by: SeungAhSon <gongsoonyee@gmail.com>

* fix: resolve suggestions

* fix: resolve code line number

---------

Co-authored-by: Woojun Jung <46880056+jungnerd@users.noreply.github.com>
Co-authored-by: thsamaji <60818655+thsamajiki@users.noreply.github.com>
Co-authored-by: SeungAhSon <gongsoonyee@gmail.com>

* Add new model (#32615)

* v1 - working version

* fix

* fix

* fix

* fix

* rename to correct name

* fix title

* fixup

* rename files

* fix

* add copied from on tests

* rename to `FalconMamba` everywhere and fix bugs

* fix quantization + accelerate

* fix copies

* add `torch.compile` support

* fix tests

* fix tests and add slow tests

* copies on config

* merge the latest changes

* fix tests

* add few lines about instruct

* Apply suggestions from code review

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* fix

* fix tests

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Fix: FA2 with packed training (#32487)

* fix check

* add tests

* [run-slow] llama, gemma2

* oops, whisper actually runs but needed some special treatment

* Fix sliding window attention used in Gemma2FlashAttention2 (#32522)

* fix sliding window attention (flash2) in gemma2 model

* [run-slow] gemma

* fix slicing attention_mask for flash_attn2

* fix slicing attention_mask when flash_attn is used

* add missing comment

* slice the last seq_len tokens in the key, value states

* revert code of slicing key, value states

* fix: Fixed conditional check for `encodec` model names (#32581)

* Fixed conditional check for encodec model names.

* Reformatted conditional check.

* Fix `.push_to_hub(..., create_pr=True, revision="my-branch")` when creating PR on not-owned repo (#32094)

Fix create_pr against existing revision

* Bump aiohttp from 3.9.4 to 3.10.2 in /examples/research_projects/decision_transformer (#32569)

Bump aiohttp in /examples/research_projects/decision_transformer

Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.9.4 to 3.10.2.
- [Release notes](https://github.com/aio-libs/aiohttp/releases)
- [Changelog](https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiohttp/compare/v3.9.4...v3.10.2)

---
updated-dependencies:
- dependency-name: aiohttp
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump torch from 1.13.1 to 2.2.0 in /examples/research_projects/visual_bert (#32220)

Bump torch in /examples/research_projects/visual_bert

Bumps [torch](https://github.com/pytorch/pytorch) from 1.13.1 to 2.2.0.
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](https://github.com/pytorch/pytorch/compare/v1.13.1...v2.2.0)

---
updated-dependencies:
- dependency-name: torch
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Cleanup tool calling documentation and rename doc (#32337)

* Rename "Templates for Chat Models" doc to "Chat Templates"

* Small formatting fix

* Small formatting fix

* Small formatting fix

* Cleanup tool calling docs as well

* Remove unneeded 'revision'

* Move tip to below main code example

* Little bonus section on template editing

* 🌐 [i18n-KO] Translated `deepspeed.md` to Korean (#32431)

* Update _toctree.yml

* docs: ko: deepspeed.md

* Apply suggestions from code review

Co-authored-by: wony617 <49024958+Jwaminju@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: wony617 <49024958+Jwaminju@users.noreply.github.com>

* Update docs/source/ko/_toctree.yml

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/ko/deepspeed.md

* Update docs/source/ko/deepspeed.md

Co-authored-by: SeungAhSon <gongsoonyee@gmail.com>

* Apply suggestions from code review

Co-authored-by: wony617 <49024958+Jwaminju@users.noreply.github.com>

* Update docs/source/ko/_toctree.yml

---------

Co-authored-by: wony617 <49024958+Jwaminju@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: SeungAhSon <gongsoonyee@gmail.com>

* 🌐 [i18n-KO] Translated `awq.md`to Korean (#32324)

* fix: manual edits

* Apply suggestions from code review

Co-authored-by: SeongWooChoi <46990061+nuatmochoi@users.noreply.github.com>
Co-authored-by: Chulhwa (Evan) Han <cjfghk5697@ajou.ac.kr>

* fix:manual edits

- Moved the translated file because it had been created under the wrong path

* Delete docs/source/ko/tasks/awq.md

* Update docs/source/ko/_toctree.yml

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: SeongWooChoi <46990061+nuatmochoi@users.noreply.github.com>
Co-authored-by: Chulhwa (Evan) Han <cjfghk5697@ajou.ac.kr>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* fix: Fixed failing `test_find_base_model_checkpoint` (#32638)

Fixed failing test_find_base_model_checkpoint.

* Bump tensorflow from 2.11.1 to 2.12.1 in /examples/research_projects/decision_transformer (#32341)

Bump tensorflow in /examples/research_projects/decision_transformer

Bumps [tensorflow](https://github.com/tensorflow/tensorflow) from 2.11.1 to 2.12.1.
- [Release notes](https://github.com/tensorflow/tensorflow/releases)
- [Changelog](https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md)
- [Commits](https://github.com/tensorflow/tensorflow/compare/v2.11.1...v2.12.1)

---
updated-dependencies:
- dependency-name: tensorflow
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* "to be not" -> "not to be" (#32636)

* "to be not" -> "not to be"

* Update sam.md

* Update trainer.py

* Update modeling_utils.py

* Update test_modeling_utils.py

* Update test_modeling_utils.py

* fix: Updated the `is_torch_mps_available()` function to include `min_version` argument (#32545)

* Fixed wrong argument in is_torch_mps_available() function call.

* Fixed wrong argument in is_torch_mps_available() function call.

* sorted the import.

* Fixed wrong argument in is_torch_mps_available() function call.

* Fixed wrong argument in is_torch_mps_available() function call.

* Update src/transformers/utils/import_utils.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* removed extra space.

* Added type hint for the min_version parameter.

* Added missing import.

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Expand inputs in processors for VLMs (#30962)

* let it be

* draft

* should not have changed

* add warnings

* fix & add tests

* fix tests

* inputs embeds cannot be passed with pixels

* more updates

* paligemma ready!

* minor typos

* update blip-2

* fix tests & raise error

* docstring

* add blip2 test

* tmp

* add image seq length to config

* update docstring

* delete

* fix tests

* fix blip

* fix paligemma

* out-of-place scatter

* add llava-next-video

* Update src/transformers/models/blip_2/modeling_blip_2.py

Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>

* remove tmp

* codestyle

* nits

* more nits

* remove overriding in tests

* comprehension when merging video

* fix-copies

* revert changes for embeds test

* fix tests after making comprehension

* Update src/transformers/models/blip_2/processing_blip_2.py

Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>

* Update src/transformers/models/blip_2/processing_blip_2.py

Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>

* more updates

* fix tests

---------

Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>

* Automatically add `transformers` tag to the modelcard (#32623)

* Automatically add `transformers` tag to the modelcard

* Specify library_name and test

* Fix tests (#32649)

* skip failing tests

* [no-filter]

* [no-filter]

* fix wording catch in FA2 test

* [no-filter]

* trigger normal CI without filtering

* fix tensors on different devices in `WhisperGenerationMixin` (#32316)

* fix

* enable on xpu

* no manual remove

* move to device

* remove to

* add move to

* Add support for GrokAdamW optimizer (#32521)

* add grokadamw

* reformat

* code review feedback, unit test

* reformat

* reformat

* Add Depth Anything V2 Metric models (#32126)

* add checkpoint and repo names

* adapt head to support metric depth estimation

* add max_depth output scaling

* add expected logits

* improve docs

* fix docstring

* add checkpoint and repo names

* adapt head to support metric depth estimation

* add max_depth output scaling

* add expected logits

* improve docs

* fix docstring

* rename depth_estimation to depth_estimation_type

* add integration test

* Refactored tests to include metric depth model inference test
* Integration test passes when the timm backbone lines are commented out (L220-L227)

* address feedback

* replace model path to use organization path

* formatting

* delete deprecated TODO

* address feedback

* [run_slow] depth_anything

* Fix: Fixed directory path for utils folder in `test_tokenization_utils.py` (#32601)

* Removed unnecessary expressions.

* Fixed directory path for utils folder in test_tokenization_utils.py

* Modify ProcessorTesterMixin for better generalization (#32637)

* Add padding="max_length" to tokenizer kwargs and change crop_size to size for image_processor kwargs

* remove crop_size argument in align processor tests to be coherent with base tests

* Add pad_token when loading tokenizer if needed, change test override tokenizer kwargs, remove unnecessary test overwrites in grounding dino

* TF_Deberta supporting mixed precision (#32618)

* Update modeling_tf_deberta.py

Corrected some code that does not support mixed precision

* Update modeling_tf_deberta_v2.py

Corrected some code that does not support mixed precision

* Update modeling_tf_deberta_v2.py

* Update modeling_tf_deberta.py

* Add files via upload

* Add files via upload

* Fix tests recurrent (#32651)

* add fix for recurrentgemma

* [no-filter]

* trigger-ci

* [no-filter]

* [no-filter]

* attempt to fix mysterious zip error

* [no-filter]

* fix lookup error

* [no-filter]

* remove summarization hack

* [no-filter]

* Support MUSA (Moore Threads GPU) backend in transformers (#31913)

Add accelerate version check, needs accelerate>=0.33.0

* fix: Fixed failing tests in `tests/utils/test_add_new_model_like.py` (#32678)

* Fixed failing tests in tests/utils/test_add_new_model_like.py

* Fixed formatting using ruff.

* Small nit.

* Update translation docs review (#32662)

update list of people to tag

* Add TorchAOHfQuantizer (#32306)

* Add TorchAOHfQuantizer

Summary:
Enable loading torchao quantized models in Hugging Face.

Test Plan:
local test

* Fix a few issues

* style

* Added tests and addressed some comments about dtype conversion

* fix torch_dtype warning message

* fix tests

* style

* TorchAOConfig -> TorchAoConfig

* enable offload + fix memory with multi-gpu

* update torchao version requirement to 0.4.0

* better comments

* add torch.compile to torchao README, add perf number link

---------

Co-authored-by: Marc Sun <marc@huggingface.co>

* Fix `JetMoeIntegrationTest` (#32332)

JetMoeIntegrationTest

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>

* Update the distributed CPU training on Kubernetes documentation (#32669)

* Update the Kubernetes CPU training example

* Add namespace arg

Signed-off-by: Dina Suehiro Jones <dina.s.jones@intel.com>

---------

Signed-off-by: Dina Suehiro Jones <dina.s.jones@intel.com>

* fix: Fixed unknown pytest config option `doctest_glob` (#32475)

Fixed unknown config option doctest_glob.

* Unpin deepspeed in Docker image/tests (#32572)

Unpin deepspeed

* Updated workflows to the latest versions (#32405)

Updated a few workflows to the latest versions.

* reopen: llava-next fails to consider padding_side during Training (#32679)

restore #32386

* fix: Corrected `falcon-mamba-7b` model checkpoint name (#32837)

Corrected the model checkpoint.

* fix: update doc link for runhouse in README.md (#32664)

* VLMs: small clean-up for cache class (#32417)

* fix beam search in video llava

* [run-slow] video_llava

* add back the position ids (#32554)

* add back the position ids

* fix failing test

* Use head_dim if in config for RoPE (#32495)

* use head_dim if in config for RoPE

* typo

* simplify with getattr

* Generate: unify `LogitsWarper` and `LogitsProcessor` (#32626)

* [tests] make test_sdpa_equivalence device-agnostic (#32520)

* fix on xpu

* [run_all]

* Cache: use `batch_size` instead of `max_batch_size` (#32657)

* more precise name

* better docstrings

* Update src/transformers/cache_utils.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Fix AutoConfig and AutoModel support for Llava-Next-Video (#32844)

* Fix: fix all model_type of Llava-Next-Video to llava_next_video

* Fix doc for llava_next_video

* Fix formatting issues
* Change llava-next-video.md file name into llava_next_video.md to make it compatible with implementation

* Fix docs TOC for llava-next-video

* improve _get_is_as_tensor_fns (#32596)

* improve _get_is_as_tensor_fns

* format

* Revert PR 32299, flag users when Zero-3 was missed (#32851)

Revert PR 32299

* fix multi-gpu with static cache (#32543)

* Reduce the error log when using core models that need their weights renamed, and provide a step forward (#32656)

* Fin

* Modify msg

* Finish up nits

* Make beam_constraints.Constraint.advance() docstring more accurate (#32674)

* Fix beam_constraints.Constraint.advance() docstring

* Update src/transformers/generation/beam_constraints.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* generate: missing `to` in DoLa body, causing exceptions in multi-gpu generation (#32856)

* Add Flax Dinov2 (#31960)

* tfmsenv restored in main

* installed flax

* forward pass done and all tests passed

* make fix-copies and cleaning the scripts

* fixup attempt 1

* fixup attempt 2

* fixup third attempt

* fixup attempt 4

* fixup attempt 5

* dinov2 doc fixed

* FlaxDinov2Model + ForImageClassification added to OBJECTS_TO_IGNORE

* external pos_encoding layer removed

* fixup attempt 6

* fixed integration test values

* fixup attempt 7

* Update src/transformers/models/dinov2/modeling_flax_dinov2.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/dinov2/modeling_flax_dinov2.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/dinov2/modeling_flax_dinov2.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/dinov2/modeling_flax_dinov2.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/dinov2/modeling_flax_dinov2.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/dinov2/modeling_flax_dinov2.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/tran…
Cemberk added a commit to Cemberk/transformers that referenced this pull request Nov 14, 2024
* Added mamba.py backend (#30139)

* Update README.md

* tests: forward ok

* backward test done

* done testing

* removed check. scripts

* Update README.md

* added use_mambapy arg

* fixed typo in warning

* protected imports w/ mambapy package

* delete pscan.py + raise rather than assert

* Update import_utils.py

* fix whitespaces and unused import

* trailing whitespace + import block unformatted

* Update modeling_mamba.py

* transpose before pscan

* shape comment

* ran make style

* use_mambapy=False by default

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* ran make fix-copies

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Rename Phi-3 rope scaling type (#31436)

* renamed phi3 rope_scaling type

* fixed trailing whitespaces

* fixed test

* added warning

* fixed format

* Revert "Incorrect Whisper long-form decoding timestamps " (#32148)

Revert "Incorrect Whisper long-form decoding timestamps  (#32003)"

This reverts commit cd48553fc8375e1a28d4d82cfe231dedf6a23af8.

* Fix typing to be compatible with later py versions (#32155)

* feat(cache): StaticCache uses index_copy_ to avoid useless copy (#31857)

* feat(cache): StaticCache uses index_copy_ to avoid useless copy

Using index_copy_ allows for an explicit in-place change of the tensor.
Some backends (XLA) will otherwise copy the tensor, making the code
slower and using more memory.

The proposed implementation ends up using less memory and results in less
compilation on XLA, while the change is generic enough to make no difference
at all on the CUDA or CPU backends (see the sketch after this entry).

* feat(cache): SlidingWindowCache uses index_copy_ to avoid useless copy

Applying the same change done in StaticCache.

* fix(cache): fallback of index_copy_ when not implemented

* fix(cache): in index_copy_ ensure tensors are on same device

* [run slow] llama

* fix(cache): add move of cache_position to same device in SlidingWindowCache

* Revert "[run slow] llama"

This reverts commit 02608dd14253ccd464e31c108e0cd94364f0e8b9.
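
A minimal standalone sketch of the update pattern described above (shapes and values are arbitrary; this is not the actual `StaticCache` code):

```python
import torch

# Pre-allocated key cache: (batch, num_heads, max_seq_len, head_dim)
key_cache = torch.zeros(1, 8, 128, 64)
new_keys = torch.randn(1, 8, 4, 64)    # keys for the 4 tokens being processed
cache_position = torch.arange(10, 14)  # cache slots these tokens occupy

# Indexed assignment: some backends (e.g. XLA) may lower this as a copy of the whole tensor.
key_cache[:, :, cache_position] = new_keys

# index_copy_ is an explicit in-place copy along dim=2, avoiding that extra copy.
key_cache.index_copy_(2, cache_position, new_keys)
```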

* Added additional kwarg for successful running of optuna hyperparameter search (#31924)

Update integration_utils.py

Added additional kwarg

* Enhancing SFT Training Efficiency Using Packing and FlashAttention2 with Position IDs (#31629)

* add DataCollatorBatchFlattening

* Update data_collator.py

* change name

* new FA2 flow if position_ids is provided

* add comments

* minor fix

* minor fix data collator

* add test cases for models

* add test case for data collator

* remove extra code

* formatting for ruff check and check_repo.py

* ruff format

ruff format tests src utils

* custom_init_isort.py
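
A quick sketch of what the new `DataCollatorWithFlattening` returns for a small batch (the token ids below are arbitrary placeholders):

```python
from transformers import DataCollatorWithFlattening

# Two already-tokenized examples.
features = [
    {"input_ids": [1, 20, 30, 40], "labels": [1, 20, 30, 40]},
    {"input_ids": [1, 50, 60], "labels": [1, 50, 60]},
]

collator = DataCollatorWithFlattening()
batch = collator(features)

print(batch["input_ids"])     # [[1, 20, 30, 40, 1, 50, 60]]
print(batch["position_ids"])  # [[0, 1, 2, 3, 0, 1, 2]]
print(batch["labels"])        # [[-100, 20, 30, 40, -100, 50, 60]]
```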

* Updated `ruff` to the latest version (#31926)

* Updated ruff version and fixed the required code according to the latest version.

* Updated ruff version and fixed the required code according to the latest version.

* Added noqa directive to ignore 1 error shown by ruff

* Dev version: v4.44.0.dev0

* Llama 3.1 conversion

Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>

* fix (#32162)

* fix: Fixed an if condition that is always evaluating to true (#32160)

Fixed an if condition always evaluating to true.

* [docs] change temperature to a positive value (#32077)

fix

* adds: extra_repr() to MambaRMSNorm to include hidden size / size of weights in the layer (#32171)

* adds: extra_repr() to MambaRMSNorm to include the hidden size of the layer

* style fix with ruff:

* fix: default value reflects the runtime environment variables rather than the ones present at import time. (#32153)

* fix: default value reflects the runtime environment variables rather than the ones present at import time.

* Fix: Change `deterministic` to None by default; use env var if None
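
The pattern behind this fix, as a small sketch (the environment-variable and function names here are illustrative, not the exact ones in the modeling code):

```python
import os
from typing import Optional


def fa2_forward(deterministic: Optional[bool] = None) -> bool:
    # Read the environment variable at call time instead of at import time,
    # so changing it after `import transformers` still takes effect.
    if deterministic is None:
        deterministic = os.environ.get("FLASH_ATTENTION_DETERMINISTIC", "0") == "1"
    return deterministic
```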

* Update qwen2.md (#32108)

* Update qwen2.md

outdated description

* Update qwen2.md

amended

* Update qwen2.md

Update

* Update qwen2.md

fix wrong version code, now good to go

* Remove conversational pipeline tests (#32099)

Remove conversation pipeline tests

* RoPE: relaxed rope validation (#32182)

* relaxed rope check

* lets also accept rope_type=None, defaulting to the original implementation

* type and rope_type can coexist

* let's not warn when someone is running a forward  (#32176)

* let's not warn when someone is running a forward without cache + self.training

* more models

* fixup

* Fix resize embedding with Deepspeed (#32192)

fix resize when deepspeed

* Fix float8_e4m3fn in modeling_utils (#32193)

* Fix float8_e4m3fn in modeling_utils

* style

* fix

* comment

* Support dequantizing GGUF FP16 format (#31783)

* support gguf fp16

* support gguf bf16 with pytorch

* add gguf f16 test

* remove bf16

* :rotating_light: No more default chat templates (#31733)

* No more default chat templates

* Add the template to the GPT-SW3 tests since it's not available by default now

* Fix GPT2 test

* Fix Bloom test

* Fix Bloom test

* Remove default templates again

* fix: Replaced deprecated `unittest method` with the correct one (#32198)

Replaced deprecated unittest method with the correct one.

* [whisper] fix short-form output type (#32178)

* [whisper] fix short-form output type

* add test

* make style

* update long-form tests

* fixes

* last fix

* finalise test

* remove unnecessary guard code related with pytorch versions 1.4.2 ~ 1.7.0 (#32210)

remove unnecessary guard code related to pytorch versions 1.4.2 ~ 1.7.0

* Update question_answering.py (#32208)

* [BigBird Pegasus] set _supports_param_buffer_assignment to False (#32222)

set _supports_param_buffer_assignment to False

* [warnings] fix E721 warnings (#32223)

fix E721 warnings

* Follow up for #31973 (#32025)

* fix

* [test_all] trigger full CI

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>

* translate philosophy.md to chinese (#32177)

* translate philosophy.md to chinese

* add the missing link

* Allow a specific microphone to be used by the ffmpeg audio pipeline utility functions. Default to using the currently active microphone on Mac (#31846)

* use currently active microphone on mac for ffmpeg_microphone

* Allow ffmpeg_microphone device to be specified

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Fix code snippet for Grounding DINO (#32229)

Fix code snippet for grounding-dino

* Generation: stop at `eos` for assisted decoding (#31301)

* fix

* move changes to prompt lookup

* add test

* set eos in assistant model

* style

* fix flakiness

* changes for new `main`

* Update tests/generation/test_utils.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update tests/generation/test_utils.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* add comment to explain

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Llava: generate without images (#32183)

* llava w/o images

* tests

* Resize embeds with DeepSpeed  (#32214)

* fix resize when deepspeed

* deepspeed uses new embeds

* we needed this

* don't log base model architecture in wandb if log model is false (#32143)

* don't log base model architecture in wandb if log model is false

* Update src/transformers/integrations/integration_utils.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* convert log model setting into an enum

* fix formatting

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Refactor: Removed unnecessary `object` base class (#32230)

* Refactored to remove unnecessary object base class.

* small fix.

* Adds: extra_repr for RMSNorm layers in most models (#32204)

* adds: extra_repr() to RMSNorm layers in multiple models

* adds: extra_repr for deprecated models as well

* formatting as per style guide
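
Roughly what the added method does, sketched on a generic RMSNorm-style module (simplified, not copied from any particular model file):

```python
import torch
from torch import nn


class RMSNorm(nn.Module):
    def __init__(self, hidden_size, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states):
        variance = hidden_states.pow(2).mean(-1, keepdim=True)
        return self.weight * hidden_states * torch.rsqrt(variance + self.variance_epsilon)

    def extra_repr(self):
        # Surfaces the weight shape and eps when the module (or a whole model) is printed.
        return f"{tuple(self.weight.shape)}, eps={self.variance_epsilon}"


print(RMSNorm(4096))  # RMSNorm((4096,), eps=1e-06)
```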

* Add check for `target_sizes is None` in `post_process_image_guided_detection` for owlv2 (#31934)

* Add check for target_sizes is None in post_process_image_guided_detection

* Make sure Owlvit and Owlv2 in sync

* Fix incorrect indentation; add check for correct size of target_sizes

* [tests] fix `static` cache implementation is not compatible with `attn_implementation==flash_attention_2` (#32039)

* add flash attention check

* fix

* fix

* Flash-Attn: fix generation when no attention mask or no pading (#32241)

* fix

* fix prev test (half of failures)

* [run-slow] llama, gemma2

* [run-slow] llama, gemma2

* More flexible trigger condition (#32251)

update

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>

* Llama 3.1: replace for loop by tensor ops at inv_freq initialization (#32244)

* replace for loop by tensor ops

* rm assert; readability
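
As a generic illustration of the loop-to-tensor-ops change (the actual Llama 3.1 initialization also applies its rope-scaling adjustments, which are omitted here):

```python
import torch

base, dim = 10000.0, 128

# Loop-based initialization of the RoPE inverse frequencies.
inv_freq_loop = torch.tensor([1.0 / (base ** (i / dim)) for i in range(0, dim, 2)])

# Equivalent tensor-op version: one arange and one exponentiation, no Python loop.
inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))

assert torch.allclose(inv_freq, inv_freq_loop)
```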

* 🚨 Bloom support for cache class (#31445)

* bloom dynamic cache

* bloom follows standard cache format

* no skips for bloom anymore

* use cache position when possible

* clean up

* codestyle

* Update src/transformers/models/bloom/modeling_bloom.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/bloom/modeling_bloom.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/bloom/modeling_bloom.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* pr comments

* isinstance fix

* address comments

* make musicgen test happy

* [run-slow] bloom

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Upload new model failure report to Hub (#32264)

upload

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>

* Optimize t5 tokenize logic to avoid redundant calls (#32270)

* Optimize t5 tokenize logic to avoid redundant calls

* fix and overwrite copies

* fix: Fixed wrong argument passed to `convert_blip_checkpoint` function call (#32262)

Removed one wrong argument passed to convert_blip_checkpoint function call.

* Repo: remove exceptions in `check_docstrings` (#32259)

remove exceptions

* make `p_mask` a numpy array before passing to `select_starts_ends` (#32076)

* fix

* bug fix

* refine

* fix

* fix(docs): Fixed a link in docs (#32274)

Fixed a link in docs.

* Generate: end-to-end compilation (#30788)

* mvp

* added test (a few models need fixes)

* fix a few test cases

* test nits

* harder test 😈

* revert changes in stablelm

* test with improved condition

* add todo

* tmp commit

* merged with main

* nits

* add todo

* final corrections

* add docs for generation compilation

* docs nits

* add  tip

* PR suggestions

* add more details to the compilation docs

* fix cache positions

* cache is now init in generate; update docs

* tag test as flaky

* docs

* post rebase make fixup and other nits

* remove unintended changes

* whisper (encoder-decoder) not supported

* move token default updates to ; add tests for token defaults

* push changes

* manual rebase

* chameleon doesn't support this

* fix test_static_cache_mha_mqa_gqa (broken in another PR)

* docs: dynamic is better with end-to-end compilation
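
One common recipe this enables, sketched under the assumption of a CUDA device and an example causal LM checkpoint (exact compile flags may differ from the docs):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="cuda"
)

# Static KV cache + a compiled forward pass; generate() then reuses the compiled graph.
model.generation_config.cache_implementation = "static"
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)

inputs = tok("Compilation works best when", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```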

* Whisper tokenizer word level timestamps (#32197)

* fix _fix_key in PreTrainedModel

* fix _find_longest_common_sequence

* add test

* remove result.json

* nit

* update test

* [pipeline] fix padding for 1-d tensors (#31776)

* [pipeline] fix padding for 1-d tensors

* add test

* make style

* Update tests/pipelines/test_pipelines_automatic_speech_recognition.py

Co-authored-by: Kamil Akesbi <45195979+kamilakesbi@users.noreply.github.com>

* Update tests/pipelines/test_pipelines_automatic_speech_recognition.py

---------

Co-authored-by: Kamil Akesbi <45195979+kamilakesbi@users.noreply.github.com>

* Make static cache compatible with torch.export (#32168)

* Add stream messages from agent run for gradio chatbot (#32142)

* Add stream_to_gradio method for running agent in gradio demo

* use torch 2.4 in 2 CI jobs (#32302)

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>

* Docs: fix GaLore optimizer code example (#32249)

Docs: fix GaLore optimizer example

Fix incorrect usage of GaLore optimizer in Transformers trainer code example.

The GaLore optimizer uses low-rank gradient updates to reduce memory usage. GaLore is quite popular and is implemented by the authors in [https://github.com/jiaweizzhao/GaLore](https://github.com/jiaweizzhao/GaLore). A few months ago GaLore was added to the HuggingFace Transformers library in https://github.com/huggingface/transformers/pull/29588.

Documentation of the Trainer module includes a few code examples of how to use GaLore. However, the `optim_target_modules` argument passed to `TrainingArguments` in those examples is incorrect, as discussed in https://github.com/huggingface/transformers/pull/29588#issuecomment-2006289512. This pull request fixes this issue.
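
A sketch of the corrected usage (the module-name patterns are model-dependent examples, and `galore-torch` must be installed separately):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="galore-out",
    optim="galore_adamw",
    # Substrings / regexes matched against module names; which ones make sense depends on the model.
    optim_target_modules=["attn", "mlp"],
)
```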

* Fix GGUF dequantize for `gguf==0.9.1` (#32298)

* fix gguf dequantize for gguf==0.9.1

* fix old version

* make style

* Cast epochs_trained to int when resuming training (#32286)

* fix epochs_trained as int when resuming training

* refactor

---------

Co-authored-by: teddyferdinan <teddy.ferdinan@pwr.edu.pl>

* feat(ci): set `fetch-depth: 0` in trufflehog checkout step (#31663)

* Fix M4T for ASR pipeline (#32296)

* tentative fix

* do the same for M4T

* Docs: formatting nits (#32247)

* doc formatting nits

* ignore non-autodocs

* Apply suggestions from code review

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/esm/modeling_esm.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/esm/modeling_esm.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* make fixup

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Alternative agent plan (#32295)

* new agent plan

* plan type assertion

* style corrections

* better prompt naming

* make fixup

* fix: Added missing raise keyword for few exceptions (#32333)

Fixed raising of few exceptions.

* fixes to properly shard FSDP across cpu and meta for cpu_efficient_loading for prequantized 4bit (#32276)

* fixes #32329 : The Torch code is correct - to get an average of 10% o… (#32335)

fixes #32329 : The Torch code is correct - to get an average of 10% of the total, we want to take 50% of the remainder after we've already masked 80% with [MASK] in the previous step.
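
Spelled out, the arithmetic above:

```python
p_mask = 0.8                    # tokens replaced with [MASK] in the previous step
p_random = 0.5 * (1 - p_mask)   # 50% of the remaining 20% of selected tokens
p_keep = 1 - p_mask - p_random  # the rest is left unchanged
print(p_random, p_keep)         # ~0.1 and ~0.1, i.e. the 80/10/10 split
```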

* Repo checks: skip docstring checks if not in the diff (#32328)

* tmp

* skip files not in the diff

* use git.Repo instead of an external subprocess

* add tiny change to confirm that the diff is working on pushed changes

* add make quality task

* more profesh main commit reference

* Fix slow GemmaTokenizer and improve SPM slow -> fast conversion process (#32191)

* Remove user-defined tokens which can be obtained through merges

* Remove debug line

* formatting

* Refactor spm slow -> fast converter

* revert unnecessary refactor

* set comprehension

* remove test files

* Use `vocab_scores`

* Always replace spiece underline with space in decode

* we no longer need token filtering

* Add save fast load slow unit test

* Remove tokenizers version check

* Remove duplicate code

* Make `<start_of_turn>` and `<end_of_turn>` special tokens

* Bias merge priority with length if score is the same

* Add unit test for merge priority

* CI

* LLaVA-NeXT: fix anyres shapes (#32314)

fix

* Gemma2 and flash-attention (#32188)

* enable flash-attn & static cache

* this works, not the prev

* fix for sliding window layers

* not needed anymore

* Llama 3.1: Fix incorrect `inv_freq` assignment (#32330)

fix 💩

* [Idefics2] - Fix FA2 call for Perceiver layer (#32275)

* Fix FA2 call for Perciever layer

* [run_slow] idefics2

* [run_slow] idefics2

* [run_slow] idefics2

* Fix up

* [run_slow] idefics2

* [run_slow] idefics2

* [run_slow] idefics2

* Gemma 2: support assisted generation (#32357)

* Fix error when streaming to gradio with non-string tool arguments (#32360)

Fix error when streaming agent run to gradio with non-string tool arguments

* >3-5x faster torch.compile forward compilation for autoregressive decoder models (#32227)

* draft

* apply changes to all relevant archs

* rerun ci - check_docstrings.py failing?

* fix docstring

* move 2D->4D mask creation to modeling file

* repo consistency

* fix the batch size = 1 case - calling contiguous is not enough

* nit

* style

* propagate to gemma/gemma-2

* prepare inputs for gemma generation

* implement test and tiny fix in gemma2

* Update src/transformers/models/bloom/modeling_bloom.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* fix copies

* ci pass

* fix gemma's test_compile_static_cache tests

* flaky

* retrigger ci

---------

Co-authored-by: sanchit-gandhi <sanchit@huggingface.co>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* fix: Removed unnecessary `@staticmethod` decorator (#32361)

* Fixed staticmethods with self as first argument.

* Fixed staticmethods with self as first argument.

* Fixed staticmethods with self as first argument.

* Fixed staticmethods with self as first argument.

* fix: warmup_steps check for training_args (#32236)

* LLaVa: add cache class attribute (#32278)

cache class flag

* [enc-dec cache] fix bug in indexing (#32370)

* [whisper] compile compatibility with long-form decoding (#31772)

* [whisper] compile compatibility with long-form decoding

* clarify comment

* fix after rebase

* finalise

* fix bsz

* fix cache split

* remove contiguous

* style

* finish

* update doc

* prevent cuda graph trace

* Remove size check between attn_weights and kv_seq_len for phi3 (#32339)

* Remove size check between attn_weights and kv_seq_len

* add unit tests

* add missing attribute _supports_param_buffer_assignment for gpt-j. (#32359)

Co-authored-by: Guoming Zhang <37257613+nv-guomingz@users.noreply.github.com>

* Check device map for saving tokenizer config on TPU (fix for issue #31971) (#32043)

* Remove TPU device map for saving tokenizer config

* Update tokenization_utils_base.py

* Fix error msg when passing non-string device into tokenizer

* Fix error message for non-string tokenizer device

* Print out tokenizer device type in error msg

* Update tokenization_utils_base.py

* update clean_up_tokenization_spaces warning (#32371)

* Empty list in defaults for LLaMA special tokens during weights conversion (#32342)

empty list in defaults

* Fix conflicting key in init kwargs in PreTrainedTokenizerBase (#31233)

* Fix conflicting key in init kwargs in PreTrainedTokenizerBase

* Update code to check for callable key in save_pretrained

* Apply PR suggestions

* Invoke CI

* Updates based on PR suggestion

* Offloaded KV Cache (#31325)

* Initial implementation of OffloadedCache

* enable usage via cache_implementation

* Address feedback, add tests, remove legacy methods.

* Remove flash-attn, discover synchronization bugs, fix bugs

* Prevent usage in CPU only mode

* Add a section about offloaded KV cache to the docs

* Fix typos in docs

* Clarifications and better explanation of streams
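
A sketch of how the offloaded cache is enabled through `cache_implementation` (assumes a CUDA device; the checkpoint name is only an example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="cuda"
)

inputs = tok("Offloading the KV cache trades speed for memory:", return_tensors="pt").to(model.device)
# "offloaded" keeps most of the KV cache off the GPU and streams it back per layer as needed.
out = model.generate(**inputs, max_new_tokens=64, cache_implementation="offloaded")
print(tok.decode(out[0], skip_special_tokens=True))
```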

* Docker: add `speech` dep to the consistency docker image (#32374)

* Fixed Hybrid Cache Shape Initialization. (#32163)

* fixed hybrid cache init, added test

* Fix Test Typo

---------

Co-authored-by: Aaron Haag <aaron.haag@siemens.com>

* Yell at the user if zero-3 init wasn't performed, but expected to have been done (#32299)

* Test this zach

* Test for improper init w/o zero3

* Move back

* Apply suggestions from code review

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Get rid of stars in warning

* Make private

* Make clear

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update docs (#32368)

nits

* RoPE: Add numerical tests ✨  (#32380)

tests! :D

* [generate] only require an attention mask for mps with torch<2.4 (#32367)

* up

* style

* stopping

* fix: (issue #32124) Exception raised when running `transformers/examples/flax/language-modeling/t5_tokenizer_model.py`. (#32157)

fix: Exception raised when running .

* MixtralFlashAttention2: put "plus 1" inside parentheses when calculating rotary_seq_len, allowing None position_ids input. (#31500)

* Mixtral: remove unnecessary plus 1 when calculating rotary_seq_len, allowing position_ids=None (no auto position_ids generation could be unsafe)

* fix typo [:-1] to [:, -1]

* to meet formatting requirement

* to meet formatting requirement

* remove white space

* MixtralFlashAttention2: put "+ 1" inside parentheses when calculating rotary_seq_len, allowing None position_ids input. Fix format/style issue.

* propagate to startcoder2, phi3, mixtral and qwen2

* update qwen2_moe

* Bump keras from 2.8.0 to 2.13.1 in /examples/research_projects/decision_transformer (#32393)

Bump keras in /examples/research_projects/decision_transformer

Bumps [keras](https://github.com/keras-team/keras) from 2.8.0 to 2.13.1.
- [Release notes](https://github.com/keras-team/keras/releases)
- [Commits](https://github.com/keras-team/keras/compare/v2.8.0...v2.13.1)

---
updated-dependencies:
- dependency-name: keras
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* fix: SeamlessM4TFeatureExtractor stride remainder (#32088)

* fix: SeamlessM4TFeatureExtractor stride remainder

* Added attention mask size test

* Reran ruff for style correction

* Phi3 tests: fix typing for Python 3.8 (#32388)

fix phi

* #32184 save total_vocab_size (#32240)

* save total_vocab_size = vocab_size + user added tokens to speed up operation

* updating length when added_tokens_decoder is set

* add test len(tokenizer)
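
The caching idea in miniature (a simplified, hypothetical class, not the actual tokenizer code):

```python
class TinyTokenizer:
    """Keeps total_vocab_size up to date instead of rebuilding the vocab on every len()."""

    def __init__(self, vocab_size):
        self.vocab_size = vocab_size
        self.added_tokens_decoder = {}
        self._update_total_vocab_size()

    def _update_total_vocab_size(self):
        # base vocab + user-added tokens, recomputed only when tokens are added
        self.total_vocab_size = self.vocab_size + len(self.added_tokens_decoder)

    def add_tokens(self, tokens):
        for token in tokens:
            self.added_tokens_decoder[self.vocab_size + len(self.added_tokens_decoder)] = token
        self._update_total_vocab_size()

    def __len__(self):
        return self.total_vocab_size  # O(1) lookup


tok = TinyTokenizer(32000)
tok.add_tokens(["<pad>", "<extra_0>"])
print(len(tok))  # 32002
```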

* add values for neftune (#32399)

I always forget what typical values are, and I have to look at the paper every time. This will be a helpful reminder.
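
For reference, a sketch of where the value goes (the NEFTune paper reports 5, 10, and 15 as typical noise scales):

```python
from transformers import TrainingArguments

args = TrainingArguments(output_dir="sft-out", neftune_noise_alpha=5)
```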

* Fix documentation references to google/bit-50 model (#32407)

* Persist embedding type of BART and mBART models after resize (#32242)

* fix: persist embedding type of MBartConditonalGeneration after resize

* fix: persist embedding type of BartConditonalGeneration after resize

* fix: Updated `test_embeded_special_tokens` for luke and mluke models (#32413)

Fixed tokenizer tests for luke, mluke models.

* Respect the config's attn_implementation if set (#32383)

* Respect the config's attn if set

* Update test - can override in from_config

* Fix

* Fix documentation links and code reference to model llava-next (#32434)

* Cache: create docs (#32150)

* draft

* updates

* works?

* try adding python example in hidden section

* another try

* how do i render python

* format as html code?

* Update docs/source/en/kv_cache.md

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Update docs/source/en/kv_cache.md

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Update docs/source/en/kv_cache.md

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Update docs/source/en/kv_cache.md

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Update docs/source/en/kv_cache.md

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* one more small update

* should render hidden section now

* add outputs

* fix links

* check links

* update all links

* update with offloaded cache

* all cache is importable, so they appear in docs

* fix copies

* docstring...

---------

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Llava: fix checkpoint_doc (#32458)

fix: add new llava like model bug

* add the missing flash attention test marker (#32419)

* add flash attention check

* fix

* fix

* add the missing marker

* bug fix

* add one more

* remove order

* add one more

* Update kwargs validation for `preprocess` with decorator (#32024)

* BLIP preprocess

* BIT preprocess

* BRIDGETOWER preprocess

* CHAMELEON preprocess

* CHINESE_CLIP preprocess

* CONVNEXT preprocess

* DEIT preprocess

* DONUT preprocess

* DPT preprocess

* FLAVA preprocess

* EFFICIENTNET preprocess

* FUYU preprocess

* GLPN preprocess

* IMAGEGPT preprocess

* INSTRUCTBLIPVIDEO preprocess

* VIVIT preprocess

* ZOEDEPTH preprocess

* VITMATTE preprocess

* VIT preprocess

* VILT preprocess

* VIDEOMAE preprocess

* VIDEOLLAVA

* TVP processing

* TVP fixup

* SWIN2SR preprocess

* SIGLIP preprocess

* SAM preprocess

* RT-DETR preprocess

* PVT preprocess

* POOLFORMER preprocess

* PERCEIVER preprocess

* OWLVIT preprocess

* OWLV2 preprocess

* NOUGAT preprocess

* MOBILEVIT preprocess

* MOBILENETV2 preprocess

* MOBILENETV1 preprocess

* LEVIT preprocess

* LAYOUTLMV2 preprocess

* LAYOUTLMV3 preprocess

* Add test

* Update tests

* Fix get large model config for Switch Transformer encoder only tester (#32438)

* Dependencies: fix typo (#32389)

deps_2

* Add Nemotron HF Support (#31699)

* Add nemotron support

* fix inference

* add unit test

* add layernorm1p as a class to avoid meta device mismatch

* test fixed

* Add copied_from statements

* remove pretraining_tp args

* remove nemotronlayernorm

* force LN computation done in FP32

* remove nemotrontokenizer and use llamatokenizer

* license update

* add option for kv_channels for minitron8b

* remove assert

* o_proj fixed

* o_proj reshape

* add gated_proj option

* typo

* remove todos

* fix broken test after merging latest main

* remove nezha/nat after merging main

* change default config to 15b model

* add nemo conversion script

* rename conversion script

* remove gate_proj option

* pr comment resolved

* fix unit test

* rename kv_channels to head_dim

* resolve PR issue

* add nemotron md

* fix broken tests

* refactor rope for nemotron

* test fix

* remove linearscaling

* whitespace and import

* fix some copied-from

* code style fix

* reformatted

* add position_embedding to nemotronattention

* rope refactor to only use config, copied-from fix

* format

* Run make fix-copies

* nemotron md with autodoc

* doc  fix

* fix order

* pass check_config_docstrings.py

* fix config_attributes

* remove all llama BC related code

* Use PreTrainedTokenizerFast

* ruff check examples

* conversion script update

* add nemotron to toctree

* Generate: fix end to end compilation (#32465)

* Add codestral mamba2 (#32080)

* add new model like

* draft cuda forward - mismatched keys (sharding on conv1)

* match keys successfully

* fix split

* get generation/forward running (wrong gens, norm?)

* update

* some refactoring

* fixes

* works up until copy to cache

* fix

* update

* NON WORKING VERSION

* version that work?

* nit

* fix config

* fix conversion script

* working cuda forward

* nit

* update

* simplification

* make mamba slow simple work

* no einops

* todo

* fix style

* no einops

* update fix no einsum

* nit

* remove einops

* bug: scan_output differs strongly

* add rms norm option

* fix fast + slow generation with and w/o cache :heavy_check_mark:

* draft integration tests

* remove a big chunk of the einsum

* fix slow, fast generations, without any einsum

* fix copies

* fix structure

* fix up modeling and tests

* fix tests

* clamping is indeed worse

* recover mamba2 cache test

* fix copies

* no cache position (yet)

* fix tf tests

* fix matmul for generate

* fixup

* skip cache tests for now

* [run-slow]mamba2

* tune out hidden states for padding

* test batched generation

* propagate attention mask changes

* fix past length

* fix integration test

* style

* address comments

* update readme

* add mamba2 version check

* fix tests

* [run-slow]mamba2

* skip edge tests

* [run-slow]mamba2

* last fixup

* [run-slow]mamba2

* update README

---------

Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>

* Migrate import checks not need accelerate, and be more clear on min versions (#32292)

* Migrate import checks to secondary accelerate calls

* better errs too

* Revert, just keep the import checks + remove accelerate-specific things

* Rm extra'

* Empty commit for ci

* Small nits

* Final

* Documentation: BOS token_id deprecation change for NLLB (#32443)

Update nllb.md

* dev version 4.45.0

* `is_torchdynamo_compiling` -- cast a wide exception net (#32476)

* cast a wide net

* make fix-copies with a few manual changes

* add copied from

* Revert "fixes to properly shard FSDP across cpu and meta for cpu_effcient_loading for prequantized 4bit (#32276)" (#32477)

* Revert "fixes to properly shard FSDP across cpu and meta for cpu_efficient_loading for prequantized 4bit (#32276)"

This reverts commit 62c60a30181a65e1a3a7f19c3055a240a6a21335.

We uncovered an issue with this change that caused our training runs to hang.

* `is_torchdynamo_compiling` -- cast a wide exception net (#32476)

* cast a wide net

* make fix-copies with a few manual changes

* add copied from

---------

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* 🌐 [i18n-KO] Translated `mask_generation.md` to Korean (#32257)

* docs: ko: tasks/mask_generation.md

* feat: nmt draft

* fix : toc local

* fix : manual edits

* fix : ko-toctree

* fix: resolve suggestions

Co-authored-by: boyunJang <gobook1234@naver.com>
Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>

* fix: resolve suggestions

Co-authored-by: boyunJang <gobook1234@naver.com>
Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>

* fix: resolve suggestions

* fix: resolve suggestions

* fix: resolve suggestions

---------

Co-authored-by: boyunJang <gobook1234@naver.com>
Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>

* 🌐 [i18n-KO] Translated `idefics.md` to Korean (#32258)

* docs: ko: tasks/idefics.md

* feat: nmt draft

* fix: manual edits

* fix: resolve suggestions

Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>
Co-authored-by: Harheem Kim <49297157+harheem@users.noreply.github.com>
Co-authored-by: timdalxx <48753785+jeongiin@users.noreply.github.com>

---------

Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>
Co-authored-by: Harheem Kim <49297157+harheem@users.noreply.github.com>
Co-authored-by: timdalxx <48753785+jeongiin@users.noreply.github.com>

* 🌐 [i18n-KO] Translated `image_to_image.md` to Korean (#32327)

* docs: ko: tasks/image_to_image.md

* feat: nmt draft

* fix: manual edits

* fix: resolve suggestions

Co-authored-by: Jihun Lim <31366038+heuristicwave@users.noreply.github.com>
Co-authored-by: Jiwook Han <33192762+mreraser@users.noreply.github.com>

* fix: handle remaining suggestions

Co-authored-by: Jiwook Han <33192762+mreraser@users.noreply.github.com>

---------

Co-authored-by: Jihun Lim <31366038+heuristicwave@users.noreply.github.com>
Co-authored-by: Jiwook Han <33192762+mreraser@users.noreply.github.com>

* Cache: new Cache format in decoder-only models (#31421)

* draft bart with new cache

* add cache for decoder-only models

* revert utils

* modify docstring

* revert bart

* minor fixes

* fix copies (not related)

* revert tests

* remove enc-dec related code

* remove bloom

* remove opt (enc-dec)

* update docstring

* git, codegen, gpt_neo, gpt_neox, gpj

* clean up

* copied from statements

* revert

* tmp

* update warning msg

* forgot git

* add more flags

* run-slow git,codegen,gpt_neo,gpt_neox,gpj

* add cache flag to VLMs

* remove files

* style

* video LLMs also need a flag

* style

* llava will go in another PR

* style

* [run-slow] codegen, falcon, git, gpt_neo, gpt_neox, gptj, idefics

* Update src/transformers/models/gpt_neo/modeling_gpt_neo.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* copy from

* deprecate until v4.45 and warn if not training

* nit

* fix test

* test static cache

* add more tests and fix models

* fix copies

* return sliding window mask

* run slow tests & fix + codestyle

* one more falcon fix for alibi

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Gemma2: add cache warning (#32279)

* gemma2 fallback to dynamic cache

* Update src/transformers/models/gemma2/modeling_gemma2.py

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Update src/transformers/models/gemma2/modeling_gemma2.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* raise error and dont fallback to dynamic cache

* prev will break most forward calls/tests

* Update src/transformers/models/gemma2/modeling_gemma2.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* update

* fix copies

---------

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* enable xla fsdp (#32048)

* enable xla fsdp

* add accelerate version check for xla fsdp

* Fix typo in tokenization_utils_base.py (#32484)

* Agents use grammar (#31735)

* Allow optional use of grammars to constrain generation

* fix broken link in docs (#32491)

`https://huggingface.co/docs/transformers/en/main_classes/pipelines#transformers.TextGenerationPipeline.__call__`

`generate_kwargs (dict, optional) — Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework here).`

link in "here" doesn't work

* Docs: alert for the possibility of manipulating logits (#32467)

* logits

* words

* 🌐 [i18n-KO] Translated `gptq.md` to Korean (#32293)

* fix: manual edits

* fix: manual edits2

* fix: delete files

* fix: resolve suggestions

Co-authored-by: Sungmin Oh <fabxoe.kor@gmail.com>
Co-authored-by: SeungYoun Lee <84276596+win2dvp21@users.noreply.github.com>
Co-authored-by: 김준재 <55151385+junejae@users.noreply.github.com>

* fix: resolve suggestions

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Sungmin Oh <fabxoe.kor@gmail.com>
Co-authored-by: SeungYoun Lee <84276596+win2dvp21@users.noreply.github.com>
Co-authored-by: 김준재 <55151385+junejae@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* 🌐 [i18n-KO] Translated `prompting.md` to Korean (#32294)

* docs: ko: tasks/prompting.md

* feat: nmt-draft

* fix: update translation in prompting.md

* fix: update toctree.yml

* fix: manual edits

* fix: toctree edits

* fix: resolve suggestions

Co-authored-by: boyunJang <gobook1234@naver.com>
Co-authored-by: Harheem Kim <49297157+harheem@users.noreply.github.com>
Co-authored-by: timdalxx <48753785+jeongiin@users.noreply.github.com>

---------

Co-authored-by: boyunJang <gobook1234@naver.com>
Co-authored-by: Harheem Kim <49297157+harheem@users.noreply.github.com>
Co-authored-by: timdalxx <48753785+jeongiin@users.noreply.github.com>

* 🌐 [i18n-KO] Translated `quantization/quanto.md` to Korean (#32281)

* docs: ko: quantization/quanto.md

* feat: nmt draft

* fix: resolve suggestions

Co-authored-by: SeungYoun Lee <84276596+win2dvp21@users.noreply.github.com>
Co-authored-by: Minki Kim <100768622+1kmmk1@users.noreply.github.com>
Co-authored-by: 김준재 <55151385+junejae@users.noreply.github.com>

* fix: resolve suggestions

Co-authored-by: SeungYoun Lee <84276596+win2dvp21@users.noreply.github.com>

---------

Co-authored-by: SeungYoun Lee <84276596+win2dvp21@users.noreply.github.com>
Co-authored-by: Minki Kim <100768622+1kmmk1@users.noreply.github.com>
Co-authored-by: 김준재 <55151385+junejae@users.noreply.github.com>

* 🌐 [i18n-KO] Translated `image_feature_extraction.md` to Korean (#32239)

* docs: ko: tasks/images_feature_extraction.md

* feat: nmt draft

* fix: manual edits

* fix: manual edits

* fix: manual edits

* fix: manual edits

* feat: manual edits

* Update docs/source/ko/tasks/image_feature_extraction.md

Co-authored-by: Jihun Lim <31366038+heuristicwave@users.noreply.github.com>

* Update docs/source/ko/tasks/image_feature_extraction.md

Co-authored-by: Jihun Lim <31366038+heuristicwave@users.noreply.github.com>

* fix: manual edits

---------

Co-authored-by: Jihun Lim <31366038+heuristicwave@users.noreply.github.com>

* Fix references to model google mt5 small (#32497)

* Docs: Fixed WhisperModel.forward’s docstring link (#32498)

Fixed WhisperModel.forward’s docstring link.

* 🌐 [i18n-KO] Translated `chat_templating.md` to Korean (#32362)

* docs: ko: chat_templating.md

* feat: nmt draft

* fix: manual edits

* Update docs/source/ko/chat_templating.md

Co-authored-by: Sungmin Oh <fabxoe.kor@gmail.com>

* Update docs/source/ko/chat_templating.md

Co-authored-by: Sungmin Oh <fabxoe.kor@gmail.com>

* fix: apply suggestions from code review - anchor

Co-authored-by: Sungmin Oh <fabxoe.kor@gmail.com>

* fix: manual edits

Co-authored-by: SeungYoun Lee <84276596+win2dvp21@users.noreply.github.com>
Co-authored-by: Minki Kim <100768622+1kmmk1@users.noreply.github.com>

* fix: manual edits

* fix: delete 'default template' section

---------

Co-authored-by: Sungmin Oh <fabxoe.kor@gmail.com>
Co-authored-by: SeungYoun Lee <84276596+win2dvp21@users.noreply.github.com>
Co-authored-by: Minki Kim <100768622+1kmmk1@users.noreply.github.com>

* Fix link to autoclass_tutorial.md in i18n.md (#32501)

* Fix typo: depracted -> deprecated (#32489)

Hello!

## Pull Request overview
* Fix typo

## Details
This should speak for itself.

cc @itazap @ArthurZucker 

- Tom Aarsen

* Fix issue #32518: Update llm_tutorial.md (#32523)

Update llm_tutorial.md

remove comma re: issue 32518

https://github.com/huggingface/transformers/issues/32518

* Change Phi3 `_supports_sdpa` to True (#32457)

* Change `_supports_sdpa` to True

* add phi3 to sdpa support list

* Uniformize kwargs for processors - GroundingDINO (#31964)

* fix typo

* uniform kwargs

* make style

* add comments

* remove return_tensors

* remove common_kwargs from processor since it propagates

* make style

* return_token_type_ids to True

* revert the default image kwargs since the image processor does not accept any value for them

* revert processing_utils.py

* make style

* add molbap's commit

* fix typo

* fix common processor

* remain

* Revert "add molbap's commit"

This reverts commit a476c6ee88318ce40d73ea31e2dc2d4faa8ae410.

* add unsync PR

* revert

* make CI happy

* nit

* import annotationformat

* Fix add-new-model-like (#31773)

* handle (processor_class, None) returned by ModelPatterns

* handle (slow, fast) image processors in add model

* handle old image processor case

* Add Qwen2-Audio (#32137)

* add qwen2audio

* Update check_repo.py

* fix style

* fix test

* fix style

* add model size

* Qwen2AudioEncoderModel->Qwen2AudioEncoder; add copy info

* Update src/transformers/models/qwen2_audio/modeling_qwen2_audio.py

Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>

* Update src/transformers/models/qwen2_audio/modeling_qwen2_audio.py

Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>

* Update src/transformers/models/qwen2_audio/modeling_qwen2_audio.py

Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>

* switch the attention_mask and the feature_attention_mask

* add to PRIVATE_MODELS in check_repo.py; add to MODEL_NAMES_TO_IGNORE in check_table.py

* fix initialization

* update chat_template

* fix consistency issue after copy

* add docstrings to _merge_input_ids_with_audio_features

* add copied from to prepare_inputs_for_generation

* add more details to docs

* rm comment

* add init_std

* Update src/transformers/models/qwen2_audio/modeling_qwen2_audio.py

Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>

* Update src/transformers/models/qwen2_audio/modeling_qwen2_audio.py

Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>

* Update src/transformers/models/qwen2_audio/modeling_qwen2_audio.py

Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>

* Update src/transformers/models/qwen2_audio/modeling_qwen2_audio.py

Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>

* update

* Update docs/source/en/model_doc/qwen2_audio.md

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* update tests

* rm ignore_index

* update processor

* rm ffmpeg_read

* Update tests/models/qwen2_audio/test_modeling_qwen2_audio.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update docs/source/en/model_doc/qwen2_audio.md

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update docs/source/en/model_doc/qwen2_audio.md

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update docs/source/en/model_doc/qwen2_audio.md

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* update

* typo

* [run_slow] qwen2_audio

* [run_slow] qwen2_audio

* [run_slow] qwen2_audio

* fix quality

* [run_slow] qwen2_audio

* [run_slow] qwen2_audio

* [run_slow] qwen2_audio

* add official model

---------

Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* filter flash_attn optional imports loading remote code (#30954)

* filter flash_attn optional imports loading remote code

* improve pattern

* fix code style

* Update src/transformers/dynamic_module_utils.py

Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>

---------

Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>

* 🌐 [i18n-KO] Translated `ko-llm_tutorial_optimization.md` to Korean (#32372)

* docs: ko: llm_tutorial_optimization.md

* feat: nmt draft

* fix: manual edits

* Update docs/source/ko/llm_tutorial_optimization.md

Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>

* Update docs/source/ko/llm_tutorial_optimization.md

Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>

* fix: resolve suggestions - 1

Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>
Co-authored-by: timdalxx <48753785+jeongiin@users.noreply.github.com>
Co-authored-by: boyunJang <gobook1234@naver.com>

* fix: resolve suggestions - 2

Co-authored-by: boyunJang <gobook1234@naver.com>
Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>
Co-authored-by: timdalxx <48753785+jeongiin@users.noreply.github.com>

---------

Co-authored-by: Chaewon Song <chaewon1019@ewhain.net>
Co-authored-by: timdalxx <48753785+jeongiin@users.noreply.github.com>
Co-authored-by: boyunJang <gobook1234@naver.com>

* 🌐 [i18n-KO] Translated `trainer.md` to Korean (#32260)

* docs: ko: ko-trainer

* feat: nmt draft

* fix: manual edits

* fix: manual edits

* fix: glossary

* fix: glossary

* Apply suggestions from code review

Co-authored-by: Jinuk <45095330+JinukHong@users.noreply.github.com>
Co-authored-by: SeongWooChoi <46990061+nuatmochoi@users.noreply.github.com>

---------

Co-authored-by: Jinuk <45095330+JinukHong@users.noreply.github.com>
Co-authored-by: SeongWooChoi <46990061+nuatmochoi@users.noreply.github.com>

* 🌐 [i18n-KO] Translated `eetq.md` to Korean (#32352)

* docs: ko: quantization/eetq.md

* feat: nmt draft

* fix docs: ko: quantization/eetq.md

* fix docs: ko: quantization/eetq.md

* fix: resolve suggestions

Co-authored-by: Jiwook Han <33192762+mreraser@users.noreply.github.com>

* fix: resolve suggestions

* fix: resolve suggestions

---------

Co-authored-by: Jiwook Han <33192762+mreraser@users.noreply.github.com>

* 🌐 [i18n-KO] Translated `fsdp.md` to Korean (#32261)

* docs: ko: fsdp.md

* feat: nmt draft

* fix: manual edits

* Apply suggestions from code review

Co-authored-by: 김준재 <55151385+junejae@users.noreply.github.com>
Co-authored-by: Minki Kim <100768622+1kmmk1@users.noreply.github.com>

* fix: resolve suggestions

* Update docs/source/ko/fsdp.md

Co-authored-by: 김준재 <55151385+junejae@users.noreply.github.com>

* Update docs/source/ko/fsdp.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: 김준재 <55151385+junejae@users.noreply.github.com>
Co-authored-by: Minki Kim <100768622+1kmmk1@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* 🌐 [i18n-KO] Translated `bitsandbytes.md` to Korean (#32408)

* docs: ko: quantization/bitsandbytes.md

* feat: nmt draft

* fix: minor typos

* fix: manual edits

* fix: manual edits

* fix: resolve suggestions

Co-authored-by: wony617 <49024958+Jwaminju@users.noreply.github.com>
Co-authored-by: YONGSANG <71686691+4N3MONE@users.noreply.github.com>
Co-authored-by: Woojun Jung <46880056+jungnerd@users.noreply.github.com>

* fix: resolve suggestions

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: wony617 <49024958+Jwaminju@users.noreply.github.com>
Co-authored-by: YONGSANG <71686691+4N3MONE@users.noreply.github.com>
Co-authored-by: Woojun Jung <46880056+jungnerd@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Fix generate with `inputs_embeds` as input (#32493)

* I think inputs_embeds has ndim == 3

* fix sequence length catch

* add generate test

* [run-slow]olmo, persimmon, gemma, gemma2, qwen2, llama

* skip whisper

* fix bart test

* more fixes
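
A hedged usage sketch of the path this commit fixes: calling `generate()` with a 3-D `inputs_embeds` tensor of shape `(batch, seq_len, hidden_size)` instead of `input_ids`. The tiny checkpoint below is an assumed stand-in; any supported decoder-only model works the same way.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hf-internal-testing/tiny-random-LlamaForCausalLM"  # assumed test checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

input_ids = tokenizer("Hello world", return_tensors="pt").input_ids
inputs_embeds = model.get_input_embeddings()(input_ids)  # (batch, seq_len, hidden_size)

with torch.no_grad():
    # Only the newly generated tokens are returned when no input_ids are given.
    out = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=5)
print(out.shape)
```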

* Fixed test `test_static_cache_exportability` with torch 2.4.0 (#32516)

Workaround the export issue in torch 2.4

Co-authored-by: Guang Yang <guangyang@fb.com>

* Fix code example to load bigcode starcoder2 7b (#32474)

* [docs] Translation guide (#32547)

clarify

* Gemma2: fix FA2 generation (#32553)

fix FA2

* Fix a bug in Qwen2Audio (#32552)

fix _update_model_kwargs_for_generation

* fix slow integration gemma2 test (#32534)

no empty revision

* fix non contiguous tensor value error in save_pretrained (#32422)

Signed-off-by: duzhanwei <duzhanwei@bytedance.com>
Co-authored-by: duzhanwei <duzhanwei@bytedance.com>

* 🌐 [i18n-KO] Translated `agent.md` to Korean (#32351)

* docs: ko: main_classes/agent

* feat: chatgpt draft

* fix: manual edits

* fix: resolve suggestions

Co-authored-by: Woojun Jung <46880056+jungnerd@users.noreply.github.com>
Co-authored-by: thsamaji <60818655+thsamajiki@users.noreply.github.com>
Co-authored-by: SeungAhSon <gongsoonyee@gmail.com>

* fix: resolve suggestions

* fix: resolve code line number

---------

Co-authored-by: Woojun Jung <46880056+jungnerd@users.noreply.github.com>
Co-authored-by: thsamaji <60818655+thsamajiki@users.noreply.github.com>
Co-authored-by: SeungAhSon <gongsoonyee@gmail.com>

* Add new model (#32615)

* v1 - working version

* fix

* fix

* fix

* fix

* rename to correct name

* fix title

* fixup

* rename files

* fix

* add copied from on tests

* rename to `FalconMamba` everywhere and fix bugs

* fix quantization + accelerate

* fix copies

* add `torch.compile` support

* fix tests

* fix tests and add slow tests

* copies on config

* merge the latest changes

* fix tests

* add few lines about instruct

* Apply suggestions from code review

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* fix

* fix tests

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Fix: FA2 with packed training (#32487)

* fix check

* add tests

* [run-slow] llama, gemma2

* oops, whisper actually runs but needed some special treatment

* Fix sliding window attention used in Gemma2FlashAttention2 (#32522)

* fix sliding window attention (flash2) in gemma2 model

* [run-slow] gemma

* fix slicing attention_mask for flash_attn2

* fix slicing attention_mask when flash_attn is used

* add missing comment

* slice the last seq_len tokens in the key, value states

* revert code of slicing key, value states

* fix: Fixed conditional check for `encodec` model names (#32581)

* Fixed conditional check for encodec model names.

* Reformatted conditional check.

* Fix `.push_to_hub(..., create_pr=True, revision="my-branch")` when creating PR on not-owned repo (#32094)

Fix create_pr against existing revision

* Bump aiohttp from 3.9.4 to 3.10.2 in /examples/research_projects/decision_transformer (#32569)

Bump aiohttp in /examples/research_projects/decision_transformer

Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.9.4 to 3.10.2.
- [Release notes](https://github.com/aio-libs/aiohttp/releases)
- [Changelog](https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiohttp/compare/v3.9.4...v3.10.2)

---
updated-dependencies:
- dependency-name: aiohttp
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump torch from 1.13.1 to 2.2.0 in /examples/research_projects/visual_bert (#32220)

Bump torch in /examples/research_projects/visual_bert

Bumps [torch](https://github.com/pytorch/pytorch) from 1.13.1 to 2.2.0.
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](https://github.com/pytorch/pytorch/compare/v1.13.1...v2.2.0)

---
updated-dependencies:
- dependency-name: torch
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Cleanup tool calling documentation and rename doc (#32337)

* Rename "Templates for Chat Models" doc to "Chat Templates"

* Small formatting fix

* Small formatting fix

* Small formatting fix

* Cleanup tool calling docs as well

* Remove unneeded 'revision'

* Move tip to below main code example

* Little bonus section on template editing

* 🌐 [i18n-KO] Translated `deepspeed.md` to Korean (#32431)

* Update _toctree.yml

* docs: ko: deepspeed.md

* Apply suggestions from code review

Co-authored-by: wony617 <49024958+Jwaminju@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: wony617 <49024958+Jwaminju@users.noreply.github.com>

* Update docs/source/ko/_toctree.yml

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/ko/deepspeed.md

* Update docs/source/ko/deepspeed.md

Co-authored-by: SeungAhSon <gongsoonyee@gmail.com>

* Apply suggestions from code review

Co-authored-by: wony617 <49024958+Jwaminju@users.noreply.github.com>

* Update docs/source/ko/_toctree.yml

---------

Co-authored-by: wony617 <49024958+Jwaminju@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: SeungAhSon <gongsoonyee@gmail.com>

* 🌐 [i18n-KO] Translated `awq.md`to Korean (#32324)

* fix: manual edits

* Apply suggestions from code review

Co-authored-by: SeongWooChoi <46990061+nuatmochoi@users.noreply.github.com>
Co-authored-by: Chulhwa (Evan) Han <cjfghk5697@ajou.ac.kr>

* fix: manual edits

- Moved the translated file, which had been created under the wrong path

* Delete docs/source/ko/tasks/awq.md

* Update docs/source/ko/_toctree.yml

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: SeongWooChoi <46990061+nuatmochoi@users.noreply.github.com>
Co-authored-by: Chulhwa (Evan) Han <cjfghk5697@ajou.ac.kr>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* fix: Fixed failing `test_find_base_model_checkpoint` (#32638)

Fixed failing test_find_base_model_checkpoint.

* Bump tensorflow from 2.11.1 to 2.12.1 in /examples/research_projects/decision_transformer (#32341)

Bump tensorflow in /examples/research_projects/decision_transformer

Bumps [tensorflow](https://github.com/tensorflow/tensorflow) from 2.11.1 to 2.12.1.
- [Release notes](https://github.com/tensorflow/tensorflow/releases)
- [Changelog](https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md)
- [Commits](https://github.com/tensorflow/tensorflow/compare/v2.11.1...v2.12.1)

---
updated-dependencies:
- dependency-name: tensorflow
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* "to be not" -> "not to be" (#32636)

* "to be not" -> "not to be"

* Update sam.md

* Update trainer.py

* Update modeling_utils.py

* Update test_modeling_utils.py

* Update test_modeling_utils.py

* fix: Updated the `is_torch_mps_available()` function to include `min_version` argument (#32545)

* Fixed wrong argument in is_torch_mps_available() function call.

* Fixed wrong argument in is_torch_mps_available() function call.

* sorted the import.

* Fixed wrong argument in is_torch_mps_available() function call.

* Fixed wrong argument in is_torch_mps_available() function call.

* Update src/transformers/utils/import_utils.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* removed extra space.

* Added type hint for the min_version parameter.

* Added missing import.

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Expand inputs in processors for VLMs (#30962)

* let it be

* draft

* should not have changed

* add warnings

* fix & add tests

* fix tests

* inputs embeds cannot be passed with pixels

* more updates

* paligemma ready!

* minor typos

* update blip-2

* fix tests & raise error

* docstring

* add blip2 test

* tmp

* add image seq length to config

* update docstring

* delete

* fix tests

* fix blip

* fix paligemma

* out-of-place scatter

* add llava-next-video

* Update src/transformers/models/blip_2/modeling_blip_2.py

Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>

* remove tmp

* codestyle

* nits

* more nits

* remove overriding in tests

* comprehension when merging video

* fix-copies

* revert changes for embeds test

* fix tests after making comprehension

* Update src/transformers/models/blip_2/processing_blip_2.py

Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>

* Update src/transformers/models/blip_2/processing_blip_2.py

Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>

* more updates

* fix tests

---------

Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>

* Automatically add `transformers` tag to the modelcard (#32623)

* Automatically add `transformers` tag to the modelcard

* Specify library_name and test

* Fix tests (#32649)

* skip failing tests

* [no-filter]

* [no-filter]

* fix wording catch in FA2 test

* [no-filter]

* trigger normal CI without filtering

* fix tensors on different devices in `WhisperGenerationMixin` (#32316)

* fix

* enable on xpu

* no manual remove

* move to device

* remove to

* add move to

* Add support for GrokAdamW optimizer (#32521)

* add grokadamw

* reformat

* code review feedback, unit test

* reformat

* reformat
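
Assuming the optimizer is registered under the name `"grokadamw"` (and that the external `grokadamw` package is installed by the time training starts), selecting it would look roughly like this:

```python
from transformers import TrainingArguments

# "grokadamw" is the assumed registry name for the new optimizer; Trainer
# instantiates it lazily, so the extra dependency is only needed at train time.
args = TrainingArguments(output_dir="out", optim="grokadamw")
print(args.optim)
```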

* Add Depth Anything V2 Metric models (#32126)

* add checkpoint and repo names

* adapt head to support metric depth estimation

* add max_depth output scaling

* add expected logits

* improve docs

* fix docstring

* add checkpoint and repo names

* adapt head to support metric depth estimation

* add max_depth output scaling

* add expected logits

* improve docs

* fix docstring

* rename depth_estimation to depth_estimation_type

* add integration test

* Refactored tests to include metric depth model inference test
* Integration test pass when the timm backbone lines are commented (L220-L227)

* address feedback

* replace model path to use organization path

* formatting

* delete deprecated TODO

* address feedback

* [run_slow] depth_anything

* Fix: Fixed directory path for utils folder in `test_tokenization_utils.py` (#32601)

* Removed un-necessary expressions.

* Fixed directory path for utils folder in test_tokenization_utils.py

* Modify ProcessorTesterMixin for better generalization (#32637)

* Add padding="max_length" to tokenizer kwargs and change crop_size to size for image_processor kwargs

* remove crop_size argument in align processor tests to be coherent with base tests

* Add pad_token when loading tokenizer if needed, change test override tokenizer kwargs, remove unnecessary test overwrites in grounding dino

* TF_Deberta supporting mixed precision (#32618)

* Update modeling_tf_deberta.py

Corrected some code which does not support mixed precision

* Update modeling_tf_deberta_v2.py

Corrected some code which does not support mixed precision

* Update modeling_tf_deberta_v2.py

* Update modeling_tf_deberta.py

* Add files via upload

* Add files via upload

* Fix tests recurrent (#32651)

* add fix for recurrentgemma

* [no-filter]

* trigger-ci

* [no-filter]

* [no-filter]

* attempt to fix mysterious zip error

* [no-filter]

* fix lookup error

* [no-filter]

* remove summarization hack

* [no-filter]

* Support MUSA (Moore Threads GPU) backend in transformers (#31913)

Add accelerate version check, needs accelerate>=0.33.0

* fix: Fixed failing tests in `tests/utils/test_add_new_model_like.py` (#32678)

* Fixed failing tests in tests/utils/test_add_new_model_like.py

* Fixed formatting using ruff.

* Small nit.

* Update translation docs review (#32662)

update list of people to tag

* Add TorchAOHfQuantizer (#32306)

* Add TorchAOHfQuantizer

Summary:
Enable loading torchao-quantized models in Hugging Face Transformers (a usage sketch follows this commit's notes).

Test Plan:
local test

Reviewers:

Subscribers:

Tasks:

Tags:

* Fix a few issues

* style

* Added tests and addressed some comments about dtype conversion

* fix torch_dtype warning message

* fix tests

* style

* TorchAOConfig -> TorchAoConfig

* enable offload + fix memory with multi-gpu

* update torchao version requirement to 0.4.0

* better comments

* add torch.compile to torchao README, add perf number link

---------

Co-authored-by: Marc Sun <marc@huggingface.co>
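
A rough usage sketch of loading a torchao-quantized model through the new quantizer; the quant type string and checkpoint are assumptions for illustration, and torchao>=0.4.0 is required per the notes above.

```python
from transformers import AutoModelForCausalLM, TorchAoConfig

quant_config = TorchAoConfig("int4_weight_only", group_size=128)  # assumed quant type
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",  # illustrative checkpoint
    device_map="auto",
    quantization_config=quant_config,
)
```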

* Fix `JetMoeIntegrationTest` (#32332)

JetMoeIntegrationTest

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>

* Update the distributed CPU training on Kubernetes documentation (#32669)

* Update the Kubernetes CPU training example

* Add namespace arg

Signed-off-by: Dina Suehiro Jones <dina.s.jones@intel.com>

---------

Signed-off-by: Dina Suehiro Jones <dina.s.jones@intel.com>

* fix: Fixed unknown pytest config option `doctest_glob` (#32475)

Fixed unknown config option doctest_glob.

* Unpin deepspeed in Docker image/tests (#32572)

Unpin deepspeed

* Updated workflows to the latest versions (#32405)

Updated few workflows to the latest versions.

* reopen: llava-next fails to consider padding_side during Training (#32679)

restore #32386

* fix: Corrected `falcon-mamba-7b` model checkpoint name (#32837)

Corrected the model checkpoint.

* fix: update doc link for runhouse in README.md (#32664)

* VLMs: small clean-up for cache class (#32417)

* fix beam search in video llava

* [run-slow] video_llava

* add back the position ids (#32554)

* add back the position ids

* fix failing test

* Use head_dim if in config for RoPE (#32495)

* use head_dim if in config for RoPE

* typo

* simplify with getattr
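
The fallback pattern described here, sketched with a stand-in config object (a real model config would be used in practice):

```python
from types import SimpleNamespace

config = SimpleNamespace(hidden_size=64, num_attention_heads=4)  # no explicit head_dim set

# Use config.head_dim for RoPE when present, otherwise derive it from hidden size / heads.
head_dim = getattr(config, "head_dim", None) or config.hidden_size // config.num_attention_heads
print(head_dim)  # 16
```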

* Generate: unify `LogitsWarper` and `LogitsProcessor` (#32626)

* [tests] make test_sdpa_equivalence device-agnostic (#32520)

* fix on xpu

* [run_all]

* Cache: use `batch_size` instead of `max_batch_size` (#32657)

* more precise name

* better docstrings

* Update src/transformers/cache_utils.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Fix AutoConfig and AutoModel support for Llava-Next-Video (#32844)

* Fix: fix all model_type of Llava-Next-Video to llava_next_video

* Fix doc for llava_next_video

* * Fix formatting issues
* Change llava-next-video.md file name into llava_next_video.md to make it compatible with implementation

* Fix docs TOC for llava-next-video

* improve _get_is_as_tensor_fns (#32596)

* improve _get_is_as_tensor_fns

* format

* Revert PR 32299, flag users when Zero-3 was missed (#32851)

Revert PR 32299

* fix multi-gpu with static cache (#32543)

* Reduce the error log when using core models that need their weights renamed, and provide a step forward (#32656)

* Fin

* Modify msg

* Finish up nits

* Make beam_constraints.Constraint.advance() docstring more accurate (#32674)

* Fix beam_constraints.Constraint.advance() docstring

* Update src/transformers/generation/beam_constraints.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* generate: missing `to` in DoLa body, causing exceptions in multi-gpu generation (#32856)

* Add Flax Dinov2 (#31960)

* tfmsenv restored in main

* installed flax

* forward pass done and all tests passed

* make fix-copies and cleaning the scripts

* fixup attempt 1

* fixup attempt 2

* fixup third attempt

* fixup attempt 4

* fixup attempt 5

* dinov2 doc fixed

* FlaxDinov2Model + ForImageClassification added to OBJECTS_TO_IGNORE

* external pos_encoding layer removed

* fixup attempt 6

* fixed integration test values

* fixup attempt 7

* Update src/transformers/models/dinov2/modeling_flax_dinov2.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/dinov2/modeling_flax_dinov2.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/dinov2/modeling_flax_dinov2.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/dinov2/modeling_flax_dinov2.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/dinov2/modeling_flax_dinov2.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/dinov2/modeling_flax_dinov2.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/tran…