@smeshk smeshk commented Jun 25, 2025

Description

This pull request introduces fundamental building blocks for learned communication systems by adding differentiable Quadrature Amplitude Modulation (QAM) and demodulation capabilities to the library.

Currently, our toolkit lacks differentiable physical layer components, which prevents the end-to-end training of neural networks that include a communication channel. These changes enable researchers and engineers to build and train models like neural autoencoders for the physical layer.

The two commits in this PR achieve the following:

  • Differentiable Gray-coded QAM Modulation (45364d5): A new function is added to map input bits to complex-valued QAM symbols. It uses Gray coding to ensure that adjacent symbols in the constellation differ by only a single bit, which is crucial for minimizing the bit error rate (BER) during demodulation.

  • Differentiable Exact & Approximate QAM Demodulation (3b5ee5e): Two demodulation methods are introduced:

    • An exact demodulator that combines the distances to all points in the constellation (assuming equal probability for all symbols) into soft log-likelihood ratios. This provides a smooth, fully differentiable function suitable for backpropagation during training.

    • An approximate demodulator that uses only the distance to the nearest constellation point. While its gradient may be less smooth (due to the min operation), it is useful for fast demodulation of higher-order constellations. A minimal sketch of both modes follows this list.
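
To make the two modes concrete, here is a minimal, self-contained PyTorch sketch of the same ideas, restricted to 16-QAM. The function names, signatures, bit ordering, and energy normalization below are illustrative assumptions, not the interface added by these commits.

import itertools

import torch


def _gray_pam4(b0, b1):
    # Gray-coded 4-PAM level per axis: 00 -> -3, 01 -> -1, 11 -> +1, 10 -> +3
    return (2 * b0 - 1) * (3 - 2 * b1)


def qam16_modulate(bits):
    # Differentiable Gray-coded 16-QAM mapper; `bits` has shape (..., 4) with values in [0, 1]
    symbols = torch.complex(
        _gray_pam4(bits[..., 0], bits[..., 1]),  # in-phase axis
        _gray_pam4(bits[..., 2], bits[..., 3]),  # quadrature axis
    )
    return symbols / torch.sqrt(torch.tensor(10.0))  # normalize to unit average symbol energy


def qam16_llr(y, noise_var, approximate=False):
    # Per-bit LLRs for received symbols `y`: exact (logsumexp) or max-log approximation
    patterns = torch.tensor(list(itertools.product([0.0, 1.0], repeat=4)))
    points = qam16_modulate(patterns)                          # (16,) reference constellation
    d2 = (y.unsqueeze(-1) - points).abs().pow(2) / noise_var   # (..., 16) scaled squared distances
    llrs = []
    for i in range(4):
        d0 = d2[..., patterns[:, i] == 0]                      # points whose i-th bit is 0
        d1 = d2[..., patterns[:, i] == 1]                      # points whose i-th bit is 1
        if approximate:   # max-log: only the nearest point in each bit class contributes
            llrs.append(d1.min(dim=-1).values - d0.min(dim=-1).values)
        else:             # exact: smooth sum over all points, equiprobable symbols assumed
            llrs.append(torch.logsumexp(-d0, dim=-1) - torch.logsumexp(-d1, dim=-1))
    return torch.stack(llrs, dim=-1)

With soft bit inputs and the logsumexp form, gradients flow smoothly through both mapper and demapper; the approximate branch trades that smoothness for a cheaper reduction, which matters most for higher-order constellations.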

Type of Change

Please mark the relevant option:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Performance improvement
  • Code refactoring (no functional changes)
  • CI/CD related changes

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged and published in downstream modules

Additional Information

Any additional information, screenshots, or context that would help reviewers understand the changes.

selimfirat and others added 28 commits June 15, 2025 22:08
Add new image compressors: JPEG2000, JPEG XL, and WebP

- Implemented JPEG2000Compressor with configurable quality and lossless options.
- Implemented JPEGXLCompressor with support for lossless compression and adjustable effort.
- Implemented WebPCompressor with options for quality, lossless mode, and compression method.
- Updated __init__.py to include new compressors in the module.
- Added unit tests for JPEG2000, JPEG XL, and WebP compressors to ensure functionality.
- Updated test script to include tests for new compressors.
- Updated requirements to include necessary dependencies for new compressors.
- Introduced SampleDataLoader class to unify loading of dataset and standard test images.
- Removed the old load_sample_images function and integrated its functionality into the new class.
- Updated example scripts and tests to utilize the new SampleDataLoader interface.
- Added methods for downloading and caching standard test images.
- Enhanced error handling and added support for resizing images during loading.
- Removed the deprecated download_example_images.py script.
- Added ModelConfig class for unified model configuration management.
- Updated BaseModel to support configuration objects (PretrainedConfig, DictConfig, dict).
- Introduced KairaTrainer for unified training of communication models with flexible argument handling.
- Implemented TrainingArguments class to support Hydra and dict-based configurations.
- Added tests for new dataset loading functionalities and improved error handling in image downloading.
- Updated requirements to include necessary libraries for new features.
…asses; enhance documentation and error handling in sample data tests
- Introduced test_init.py to validate imports and exports in kaira.data module.
- Enhanced test_sample_data.py to include tests for download_image function.
- Created test_training.py with extensive coverage for Trainer and TrainingArguments classes, including various initialization scenarios and configuration handling.
- Added tests for extracting configuration values and ensuring proper parameter filtering in TrainingArgumentsMixin.
- Verified integration between different components of the training module.
- Updated `kaira_train.py` to streamline model loading and configuration parsing.
- Removed unnecessary parameters from `load_model_from_config` function.
- Simplified configuration loading logic using Hydra and OmegaConf.
- Enhanced `create_training_arguments_from_args` to support direct CLI argument parsing.
- Removed extensive test cases for training arguments and trainer due to redundancy.
- Ensured all tests are aligned with the new configuration handling approach.
- Updated kaira_train.py to streamline model loading and configuration parsing
- Removed unnecessary parameters from load_model_from_config function
- Simplified configuration loading logic using Hydra and OmegaConf
- Enhanced create_training_arguments_from_args to support direct CLI argument parsing
- Removed extensive test cases for training arguments and trainer due to redundancy
- Ensured all tests are aligned with the new configuration handling approach

(Cherry-picked from 8f2293c with conflict resolution)
@Copilot Copilot AI review requested due to automatic review settings June 25, 2025 12:59
@smeshk smeshk requested a review from selimfirat as a code owner June 25, 2025 12:59

@Copilot Copilot AI left a comment


Pull Request Overview

This PR adds differentiable Gray-coded QAM modulation and demodulation capabilities to enable end-to-end learnable communication channels. Key changes include:

  • Introducing a differentiable parameter in the QAM modulator with corresponding behavior changes.
  • Adding an approximate demodulation mode with a new LLR calculation branch.
  • Refactoring squared distance computations for LLR calculations in the demodulator.

(diff excerpt from the demodulator)
- # Find minimum distance across all points
- min_distances, _ = torch.min(squared_distances, dim=-1)
- return min_distances
+ return squared_distances

Copilot AI Jun 25, 2025


In the _min_squared_distance function, the torch.min reduction is removed, causing the function to return the full squared distances instead of the minimum. This change breaks the expected behavior for LLR calculations; please ensure that _min_squared_distance returns the minimum squared distances as originally intended.
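
For reference, a minimal sketch of the behaviour the comment expects; the argument names and shapes here are assumptions, since only the quoted fragment of the function is visible.

import torch


def _min_squared_distance(received, constellation):
    # received: complex tensor of shape (...,); constellation: complex tensor of shape (M,)
    squared_distances = (received.unsqueeze(-1) - constellation).abs().pow(2)  # (..., M)
    # Find minimum distance across all points
    min_distances, _ = torch.min(squared_distances, dim=-1)  # reduce over the M constellation points
    return min_distances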

