
[Potential Bug] Normalize real input in awgn. #21

Open
@datlife

Description


Hi @veeresht again,

According to the Wikipedia article on SNR and a dsp.stackexchange thread, it seems that we might need to normalize real inputs to be zero-mean, i.e. map the 0/1 bits to antipodal symbols via inputs = 2.0 * inputs - 1.0, when adding white Gaussian noise to signals.
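To make the point concrete, here is a quick numpy check (my own snippet, not CommPy code): raw 0/1 bits carry a DC offset and only half the energy of antipodal BPSK symbols, while the 2*x - 1 mapping gives the zero-mean, unit-energy signal that the usual SNR definitions assume.

import numpy as np

bits = np.random.randint(0, 2, 100000).astype(float)
bpsk = 2.0 * bits - 1.0  # map {0, 1} -> {-1, +1}

print(np.mean(bits), np.mean(bits ** 2))  # ~0.5, ~0.5: DC offset, energy 0.5
print(np.mean(bpsk), np.mean(bpsk ** 2))  # ~0.0, 1.0: zero mean, unit energy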

I wonder if this is a potential bug. I ran two tests (decoding convolutional codes over an AWGN channel with Viterbi decoding) to check my speculation:

  • First test: original implementation of awgn
  • Second test: modified version, shown below
from numpy import sqrt
from numpy.random import randn

def awgn(input_signal, snr_dB, rate=1.0):
    # Average symbol energy of the raw input
    avg_energy = sum(abs(input_signal) * abs(input_signal)) / len(input_signal)
    snr_linear = 10 ** (snr_dB / 10.0)
    noise_variance = avg_energy / (2 * rate * snr_linear)

    if isinstance(input_signal[0], complex):
        # Complex input: split the noise variance across I and Q
        noise = (sqrt(noise_variance) * randn(len(input_signal))) + \
                (sqrt(noise_variance) * randn(len(input_signal)) * 1j)
    else:
        noise = sqrt(2 * noise_variance) * randn(len(input_signal))
        input_signal = 2.0 * input_signal - 1.0  # Zero-mean normalization HERE

    output_signal = input_signal + noise
    return output_signal
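For what it's worth, here is a small sanity check against the modified version (my own snippet; awgn refers to the function above). If my algebra is right, the measured SNR comes out about 3 dB above snr_dB, because avg_energy is still computed on the raw 0/1 bits (0.5) while the transmitted symbols have unit energy, so there may still be a scaling convention to settle:

import numpy as np

bits = np.random.randint(0, 2, 200000)
snr_dB, rate = 3.0, 1.0
rx = awgn(bits, snr_dB, rate)

symbols = 2.0 * bits - 1.0                  # what awgn actually transmits
noise_power = np.mean((rx - symbols) ** 2)
signal_power = np.mean(symbols ** 2)        # exactly 1.0 for +/-1 symbols
print(10 * np.log10(signal_power / noise_power))  # empirical SNR in dB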

Test file:

import multiprocessing as mp
import numpy as np
import commpy as cp
import matplotlib.pyplot as plt
import commpy.channelcoding.convcode as cc

from commpy.channelcoding import Trellis

# For reproducibility
np.random.seed(2018)

NUM_EXAMPLES = 50
SNRs = np.arange(-1.0, 5.0, 1)
BLOCK_LEN = 100

M = np.array([2])
G = np.array([[0o7, 0o5]])
TB_DEPTH = 15
trellis = Trellis(M, G, feedback=7)

def simulation_func(snr):
    # Generate random message bits to be encoded
    message_bits = np.random.randint(0, 2, BLOCK_LEN)

    # Encode message bits
    coded_bits = cc.conv_encode(message_bits, trellis)

    # Add white Gaussian noise (AWGN channel)
    noisy_signal = cp.channels.awgn(coded_bits, snr, rate=1/2)
    
    # Estimate the original message bits using Viterbi decoding
    decoded_bits = cc.viterbi_decode(noisy_signal, trellis, TB_DEPTH, decoding_type='unquantized')     
    
    num_bit_errors = cp.utilities.hamming_dist(message_bits, decoded_bits[:-int(M)])
    return num_bit_errors


if __name__ == '__main__':
    BERs, BLERs = [], []
    for snr in SNRs:
        # Run NUM_EXAMPLES trials in parallel
        with mp.Pool(2) as pool:
            hamming_dists = pool.map(simulation_func, [snr] * NUM_EXAMPLES)
        # Save result
        BERs.append(np.sum(hamming_dists)/ (BLOCK_LEN * NUM_EXAMPLES))
        BLERs.append(np.count_nonzero(hamming_dists)/NUM_EXAMPLES)
        print('SNR = %d BER: %.2f BLER: %.2f'% (snr, BERs[-1], BLERs[-1]))

    # Plot the result
    fig, (left, right) = plt.subplots(1, 2, figsize=(10, 5))
    left.plot(SNRs, BERs)
    left.set_ylabel('Bit Error Rate (BER)')
    left.set_xlabel('Signal-to-Noise Ratio (SNR in dB)')
    left.semilogy()
    left.grid(True, which='both')
    right.plot(SNRs, BLERs)
    right.set_ylabel('Block Error Rate (BLER)')
    right.set_xlabel('Signal-to-Noise Ratio (SNR in dB)')
    right.semilogy()
    right.grid(True, which='both')
    fig.tight_layout()
    plt.show()

BEFORE (First Test)

(figure_1: BER and BLER vs. SNR, original awgn)

SNR = -1 BER: 0.36 BLER: 1.00
SNR = 0 BER: 0.34 BLER: 1.00
SNR = 1 BER: 0.33 BLER: 1.00
SNR = 2 BER: 0.31 BLER: 1.00
SNR = 3 BER: 0.30 BLER: 1.00
SNR = 4 BER: 0.28 BLER: 1.00

AFTER (Second Test)

(figure_2: BER and BLER vs. SNR, modified awgn)

SNR = -1 BER: 0.13 BLER: 1.00
SNR = 0 BER: 0.07 BLER: 0.96
SNR = 1 BER: 0.04 BLER: 0.66
SNR = 2 BER: 0.02 BLER: 0.44
SNR = 3 BER: 0.00 BLER: 0.18
SNR = 4 BER: 0.00 BLER: 0.12
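One way to read the BEFORE curve: if the unquantized Viterbi metric compares received samples against ±1 references (an assumption on my part about the decoder internals), then a noiseless 0-bit arrives at 0.0, exactly equidistant from both references, so roughly half the symbols carry no information and the BER barely improves with SNR. A toy illustration:

refs = (-1.0, +1.0)
for received in (0.0, 1.0):  # noiseless 0/1 signaling, no mapping applied
    dists = [(received - r) ** 2 for r in refs]
    print(received, dists)
# 0.0 -> [1.0, 1.0]: the metric cannot tell the two branches apart
# 1.0 -> [4.0, 0.0]: only transmitted 1-bits are decoded reliably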

I have not submitted a pull request yet because I am not sure. What do you think?
