
Conversation

@adrianSRoman
Owner

No description provided.

@adrianSRoman adrianSRoman requested a review from Copilot August 24, 2025 01:01

Copilot AI left a comment


Pull Request Overview

This PR optimizes the LAM (Learned Adaptive Model) by removing explicit loops and implementing vectorized operations for better performance. The changes replace sequential frequency band processing with batch operations that process all bands simultaneously.

Key changes:

  • Vectorized eigendecomposition and matrix operations across all frequency bands
  • Removed explicit loops in encoding and decoding phases
  • Maintained identical mathematical operations while improving computational efficiency
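The batched eigendecomposition mentioned above can be sketched as follows. This is a minimal illustration of the vectorization pattern, not the PR's actual code: the shapes (`batch_size`, `freq_bands`, `N_ch`) and the covariance input are assumptions for the example.

```python
import torch

# Hypothetical shapes (placeholders, not taken from the PR):
# one [N_ch, N_ch] Hermitian matrix per frequency band, batched.
batch_size, freq_bands, N_ch = 2, 4, 8
cov = torch.randn(batch_size, freq_bands, N_ch, N_ch)
cov = cov + cov.transpose(-2, -1)  # symmetrize so eigh applies

# torch.linalg.eigh decomposes the last two dimensions and batches over
# all leading ones, so a per-band Python loop collapses into one call.
eigvals, eigvecs = torch.linalg.eigh(cov)
# eigvals: [batch_size, freq_bands, N_ch]
# eigvecs: [batch_size, freq_bands, N_ch, N_ch]
```

Because `eigh` treats every leading dimension as a batch dimension, the per-band results are mathematically the same as looping over bands, just computed in one vectorized call.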


Comment on lines +69 to +71
# Batch matrix multiply: [batch_size, freq_bands, Npx, N_ch] @ [batch_size, freq_bands, N_ch, N_ch]
# -> [batch_size, freq_bands, Npx, N_ch]
latent_x = torch.matmul(D_conj_T.unsqueeze(0), Vs)

Copilot AI Aug 24, 2025


Possible dimension mismatch in the matrix multiplication. D_conj_T has shape [num_bands, Npx, N_ch] and becomes [1, num_bands, Npx, N_ch] after unsqueeze, while Vs has shape [batch_size, freq_bands, N_ch, N_ch]. For the multiplication to be valid, the inner dimension (N_ch) must match and the frequency-band dimensions must broadcast correctly.

Suggested change
- # Batch matrix multiply: [batch_size, freq_bands, Npx, N_ch] @ [batch_size, freq_bands, N_ch, N_ch]
- # -> [batch_size, freq_bands, Npx, N_ch]
- latent_x = torch.matmul(D_conj_T.unsqueeze(0), Vs)
+ # Expand D_conj_T to match batch size for broadcasting
+ # D_conj_T: [num_bands, Npx, Nch] -> [1, freq_bands, Npx, Nch] -> [batch_size, freq_bands, Npx, Nch]
+ D_conj_T_expanded = D_conj_T.unsqueeze(0).expand(batch_size, -1, -1, -1)
+ latent_x = torch.matmul(D_conj_T_expanded, Vs)
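For context on this suggestion: `torch.matmul` broadcasts leading (batch) dimensions on its own, so the original `unsqueeze(0)` form and the suggested explicit `expand` form produce the same result when the inner `N_ch` dimension already matches; `expand` mainly makes the intent explicit without copying memory. A small sketch, with all shapes (`batch_size`, `freq_bands`, `Npx`, `N_ch`) chosen as placeholders for illustration:

```python
import torch

# Illustrative shapes only (assumptions, not the PR's real dimensions):
batch_size, freq_bands, Npx, N_ch = 2, 4, 16, 8
D_conj_T = torch.randn(freq_bands, Npx, N_ch)
Vs = torch.randn(batch_size, freq_bands, N_ch, N_ch)

# matmul broadcasts the leading dims: [1, freq_bands, Npx, N_ch] against
# [batch_size, freq_bands, N_ch, N_ch] -> [batch_size, freq_bands, Npx, N_ch]
out_a = torch.matmul(D_conj_T.unsqueeze(0), Vs)

# Explicit expand (no data copy) gives the same result:
out_b = torch.matmul(D_conj_T.unsqueeze(0).expand(batch_size, -1, -1, -1), Vs)
```

Both forms yield identical tensors of shape [batch_size, freq_bands, Npx, N_ch], so the suggested change is a readability fix rather than a numerical one, assuming the channel dimensions genuinely agree.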


Copilot AI left a comment


Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.

