Cast A_log to float32 before exp in Mamba2Simple.forward #923
Open
Chessing234 wants to merge 1 commit into state-spaces:main from
Conversation
Mamba2Simple stores `A_log` as `nn.Parameter(torch.log(A).to(dtype=dtype))`, i.e. in the model's configured dtype (often bf16/fp16). The forward pass then does `A = -torch.exp(self.A_log)` directly, so the log/exp round-trip runs in low precision and feeds a quantised `A` into the SSD kernels.
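For concreteness, a minimal standalone sketch of the round-trip error (not code from this PR; the `uniform_(1, 16)` initialisation is an assumption mirroring Mamba2's default `A_init_range`):

```python
import torch

# Hypothetical repro, not part of this PR: initialise A roughly the way
# Mamba2 does (uniform in A_init_range=(1, 16) by default), store log(A)
# in bf16 as Mamba2Simple's __init__ does, then compare exp() in bf16
# against exp() after upcasting to fp32.
torch.manual_seed(0)
nheads = 8
A = torch.empty(nheads).uniform_(1, 16)

A_log = torch.log(A).to(torch.bfloat16)    # parameter storage dtype
A_lowprec = -torch.exp(A_log)              # current Mamba2Simple.forward
A_upcast = -torch.exp(A_log.float())       # mamba2.py reference behaviour

# The bf16 exp quantises A a second time; the upcast only pays the
# one-off rounding cost of storing log(A) in bf16.
print((A_lowprec.float() - A_upcast).abs().max())
```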
The non-simple Mamba2 module (`mamba_ssm/modules/mamba2.py`) is the reference implementation for the same parameter and explicitly upcasts before `exp` in both the forward and step paths:

```python
A = -torch.exp(self.A_log.float())  # (nheads) or (d_inner, d_state)
```
`mamba_simple.py` follows the same convention. Mamba2Simple appears to be a direct reduction of `mamba2.py` and just missed the `.float()` cast. Match the reference behaviour so Mamba2Simple produces numerically consistent `A` values in mixed-precision runs.
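The change itself is a one-line patch along these lines (context sketched from the description above; the trailing shape comment in mamba2_simple.py may differ):

```diff
-        A = -torch.exp(self.A_log)
+        A = -torch.exp(self.A_log.float())  # (nheads) or (d_inner, d_state)
```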
Bug

`Mamba2Simple.forward` in `mamba_ssm/modules/mamba2_simple.py` computes `A` directly from `self.A_log` without promoting to float32: `A = -torch.exp(self.A_log)`. `A_log` is registered in `__init__` as `nn.Parameter(torch.log(A).to(dtype=dtype))`, so it lives in the model's configured dtype (commonly `bf16` or `fp16`). The `exp` is therefore executed in low precision, and the quantised `A` is handed to the SSD kernels (`mamba_split_conv1d_scan_combined` / `mamba_chunk_scan_combined`).

Root cause
The two sibling modules, which share the exact same `A_log` storage convention, both upcast before `exp`:

- `mamba_ssm/modules/mamba2.py`, forward (line 182): `A = -torch.exp(self.A_log.float())`
- `mamba_ssm/modules/mamba2.py`, step (line 307): `A = -torch.exp(self.A_log.float())`
- `mamba_ssm/modules/mamba_simple.py`, forward (line 143) and step (line 235): `A = -torch.exp(self.A_log.float())`

Given that `Mamba2Simple` is a trimmed-down version of `Mamba2`, the missing `.float()` here is an oversight from when the simpler variant was factored out.

Fix
Add the `.float()` cast so `exp` runs in fp32, matching the reference behaviour: `A = -torch.exp(self.A_log.float())`.

Why the fix is correct

- It matches `Mamba2.forward`, `Mamba2.step`, and both `Mamba` paths, so `Mamba2Simple` now produces numerically consistent `A` values with the non-simple path under mixed precision.
- `A_log._no_weight_decay = True` and gradient flow are unaffected: `.float()` creates an upcast view in the forward graph, and autograd still propagates gradients back into `self.A_log` (see the sketch below).
- The parameter storage is untouched; only the forward computation changes, so the SSD kernels receive an fp32 `A` the entire time.
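A quick toy check of the gradient-flow claim (standalone, not from the PR):

```python
import torch

# Toy check: .float() is a differentiable upcast, so gradients still
# reach the low-precision parameter underneath it.
A = torch.empty(8).uniform_(1, 16)
A_log = torch.nn.Parameter(torch.log(A).to(torch.bfloat16))

out = -torch.exp(A_log.float())   # fp32 exp, as in the patched forward
out.sum().backward()

assert A_log.grad is not None              # gradient flowed through .float()
assert A_log.grad.dtype == torch.bfloat16  # and lands in the parameter's dtype
```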