
[Submission] Cautious NAdamW jax #9


Open
kyleliang919 wants to merge 5 commits into main

Conversation

@kyleliang919 commented Jun 20, 2025

Cautious NAdamW jax

Submission Information

submission_name: "Cautious_NAdamW"
submission_folder: "submissions/external_tuning/cautious_nadamw"
authors: "Kaizhao Liang"
affiliations: "University of Texas at Austin"
version: "1.0"
ruleset: "self-tuning"
framework: "JAX"
description: "Cautious NAdamW ([Liang, 2025](https://arxiv.org/abs/2411.16085))."
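
For readers unfamiliar with the method, here is a minimal optax-style sketch of the cautious masking rule from the paper, wrapped around NAdamW. This is illustrative only and is not the submission's code; the wrapper name `cautious` and the hyperparameter values are placeholders.

```python
# Minimal sketch of the cautious update rule (Liang et al., arXiv:2411.16085)
# as an optax wrapper. Illustrative only; not the submission's implementation.
import jax
import jax.numpy as jnp
import optax


def cautious(inner: optax.GradientTransformation,
             eps: float = 1e-8) -> optax.GradientTransformation:
  """Zeroes out update components that disagree with the gradient.

  Note on signs: optax updates are added to the parameters, so an update
  component agrees with the descent direction when update * grad < 0.
  Surviving components are rescaled by the inverse mask density so the
  average step size is roughly preserved, as in the paper.
  """

  def init_fn(params):
    return inner.init(params)

  def update_fn(grads, state, params=None):
    updates, new_state = inner.update(grads, state, params)

    def mask(u, g):
      m = (u * g < 0).astype(u.dtype)       # 1 where the step descends along g
      scale = m.size / (jnp.sum(m) + eps)   # rescale by inverse mask density
      return u * m * scale

    return jax.tree_util.tree_map(mask, updates, grads), new_state

  return optax.GradientTransformation(init_fn, update_fn)


# Example: Cautious NAdamW with placeholder hyperparameters.
optimizer = cautious(optax.nadamw(learning_rate=1e-3, weight_decay=1e-2))
```

The actual submission may apply the mask at a different point in the update chain (for example before weight decay or learning-rate scaling), so treat this purely as a sketch of the rule.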

Evidence for the Submission's Performance

Paper: https://huggingface.co/papers/2411.16085
Results on RL: https://x.com/KyleLiang5/status/1931344549302927444

Independent verification:
https://huggingface.co/rwightman/timm-optim-caution
https://x.com/_clashluke/status/1935961388553290108

Comments

Fingers crossed

@kyleliang919 requested a review from a team as a code owner June 20, 2025 22:35

github-actions bot commented Jun 20, 2025

MLCommons CLA bot All contributors have signed the MLCommons CLA ✍️ ✅

@kyleliang919 (Author)

recheck

1 similar comment
@kyleliang919 (Author)

recheck

@fsschneider (Contributor) commented Jun 25, 2025

Hi!

Thanks for your submission. We are very interested in benchmarking Cautious optimizers.
Since we are currently focusing our efforts on strengthening the self-tuning leaderboard, would you be interested in also submitting a self-tuning version of Cautious NAdamW? We do have a self-tuning NAdamW baseline you could use as a starting point.

@kyleliang919 (Author)

recheck

@kyleliang919 (Author) commented Jun 25, 2025

> Hi!
>
> Thanks for your submission. We are very interested in benchmarking Cautious optimizers. Since we are currently focusing our efforts on strengthening the self-tuning leaderboard, would you be interested in also submitting a self-tuning version of Cautious NAdamW? We do have a self-tuning NAdamW baseline you could use as a starting point.

I just copied the implementation over to self-tuning.
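
For context, the self-tuning ruleset ships fixed hyperparameters rather than a per-workload search space, so "copying the implementation over" amounts to reusing the same update rule with hard-coded settings. A rough sketch of what that could look like, reusing the `cautious` wrapper sketched above and with purely illustrative values (not the submission's actual hyperparameters):

```python
# Illustrative only: placeholder hyperparameters for a self-tuning variant,
# reusing the `cautious` wrapper sketched earlier. Not the submission's values.
import optax

HPARAMS = dict(learning_rate=1e-3, b1=0.9, b2=0.999, weight_decay=1e-2)

def make_optimizer(num_train_steps: int) -> optax.GradientTransformation:
  # A warmup + cosine decay schedule is a common choice for NAdamW baselines.
  schedule = optax.warmup_cosine_decay_schedule(
      init_value=0.0,
      peak_value=HPARAMS["learning_rate"],
      warmup_steps=int(0.05 * num_train_steps),
      decay_steps=num_train_steps,
  )
  return cautious(
      optax.nadamw(
          learning_rate=schedule,
          b1=HPARAMS["b1"],
          b2=HPARAMS["b2"],
          weight_decay=HPARAMS["weight_decay"],
      ))
```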
