# MRG, BUG: Fix bugs with envcorr #8658
```diff
@@ -13,7 +13,7 @@
 @verbose
 def envelope_correlation(data, combine='mean', orthogonalize="pairwise",
-                         verbose=None):
+                         log=False, absolute=True, verbose=None):
     """Compute the envelope correlation.

     Parameters
```
|
```diff
@@ -40,6 +40,18 @@ def envelope_correlation(data, combine='mean', orthogonalize="pairwise",
         absolute values.

         .. versionadded:: 0.19
+    log : bool
+        If True (default False), square and take the log before orthogonalizing
+        envelopes or computing correlations.
+
+        .. versionadded:: 0.22
+    absolute : bool
+        If True (default), then take the absolute value of correlation
+        coefficients before making each epoch's correlation matrix
+        symmetric (and thus before combining matrices across epochs).
+        Only used when ``orthogonalize=True``.
+
+        .. versionadded:: 0.22
     %(verbose)s

     Returns
```
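The new ``absolute`` flag only changes the symmetrization step. A toy NumPy sketch of the difference (matrix values invented for illustration; this is not MNE code):

```python
import numpy as np

# toy asymmetric "correlation" matrix with mixed signs
corr = np.array([[0.0, -0.4, 0.2],
                 [0.3, 0.0, -0.1],
                 [0.5, 0.6, 0.0]])

# absolute=True (default): rectify, then symmetrize
sym_abs = np.abs(corr)
sym_abs = (sym_abs.T + sym_abs) / 2.

# absolute=False: keep signs, so opposite-sign entries partially cancel
sym_signed = (corr.T + corr) / 2.

print(sym_abs[0, 1], sym_signed[0, 1])  # 0.35 vs -0.05
```

With signed values kept, averaging an entry with its transpose can shrink or flip it, which is why the flag matters when comparing against FieldTrip's signed output.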
|
|
```diff
@@ -54,6 +66,10 @@ def envelope_correlation(data, combine='mean', orthogonalize="pairwise",
     This function computes the power envelope correlation between
     orthogonalized signals [1]_ [2]_.

+    .. versionchanged:: 0.22
+        Computations fixed for ``orthogonalize=True`` and diagonal entries are
+        set explicitly to zero.
+
     References
     ----------
     .. [1] Hipp JF, Hawellek DJ, Corbetta M, Siegel M, Engel AK (2012)
```
|
|
```diff
@@ -99,6 +115,9 @@ def envelope_correlation(data, combine='mean', orthogonalize="pairwise",
         data_mag = np.abs(epoch_data)
         data_conj_scaled = epoch_data.conj()
         data_conj_scaled /= data_mag
+        if log:
+            data_mag *= data_mag
+            np.log(data_mag, out=data_mag)
         # subtract means
         data_mag_nomean = data_mag - np.mean(data_mag, axis=-1, keepdims=True)
         # compute variances using linalg.norm (square, sum, sqrt) since mean=0
```
|
```diff
@@ -107,21 +126,29 @@ def envelope_correlation(data, combine='mean', orthogonalize="pairwise",
         corr = np.empty((n_nodes, n_nodes))
         for li, label_data in enumerate(epoch_data):
             if orthogonalize is False:  # the new code
-                label_data_orth = data_mag
-                label_data_orth_std = data_mag_std
+                label_data_orth = data_mag[li]
+                label_data_orth_std = data_mag_std[li]
             else:
                 label_data_orth = (label_data * data_conj_scaled).imag
+                np.abs(label_data_orth, out=label_data_orth)
+                # protect against invalid value -- this will be zero
+                # after (log and) mean subtraction
+                label_data_orth[li] = 1.
+                if log:
+                    label_data_orth *= label_data_orth
+                    np.log(label_data_orth, out=label_data_orth)
                 label_data_orth -= np.mean(label_data_orth, axis=-1,
                                            keepdims=True)
                 label_data_orth_std = np.linalg.norm(label_data_orth, axis=-1)
                 label_data_orth_std[label_data_orth_std == 0] = 1
             # correlation is dot product divided by variances
-            corr[li] = np.dot(label_data_orth, data_mag_nomean[li])
-            corr[li] /= data_mag_std[li]
+            corr[li] = np.sum(label_data_orth * data_mag_nomean, axis=1)
+            corr[li] /= data_mag_std
             corr[li] /= label_data_orth_std
         if orthogonalize is not False:
             # Make it symmetric (it isn't at this point)
-            corr = np.abs(corr)
+            if absolute:
+                corr = np.abs(corr)
             corr = (corr.T + corr) / 2.
         corrs.append(corr)
         del corr
```
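For intuition, the vectorized loop above can be sketched pair-by-pair in plain NumPy (toy data, variable names invented here; this mirrors the structure, not the exact MNE implementation):

```python
import numpy as np

rng = np.random.RandomState(0)
n_nodes, n_times = 4, 256
# stand-in for a Hilbert analytic signal: complex node time series
data = rng.randn(n_nodes, n_times) + 1j * rng.randn(n_nodes, n_times)

corr = np.zeros((n_nodes, n_nodes))
for ii in range(n_nodes):
    x = data[ii]
    for jj in range(n_nodes):
        if ii == jj:
            continue  # self-term is degenerate after orthogonalization; leave 0
        # orthogonalize signal jj with respect to ii, keep the envelope (abs)
        y_orth = np.abs((data[jj] * (x.conj() / np.abs(x))).imag)
        # correlate the two envelopes
        corr[ii, jj] = np.corrcoef(np.abs(x), y_orth)[0, 1]

# rectify and symmetrize, as the fixed code does with absolute=True
corr = np.abs(corr)
corr = (corr + corr.T) / 2.
```

The asymmetry before the final step is inherent: orthogonalizing jj with respect to ii is not the same operation as the reverse, which is why the symmetrization (and the `absolute` choice) exists at all.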
|
|
```diff
@@ -15,19 +15,23 @@
 def _compute_corrs_orig(data):
     # This is the version of the code by Sheraz and Denis.
-    # For this version (epochs, labels, time) must be -> (labels, time, epochs)
-    data = np.transpose(data, (1, 2, 0))
-    corr_mats = np.empty((data.shape[0], data.shape[0], data.shape[2]))
-    for index, label_data in enumerate(data):
-        label_data_orth = np.imag(label_data * (data.conj() / np.abs(data)))
-        label_data_orig = np.abs(label_data)
-        label_data_cont = np.transpose(
-            np.dstack((label_data_orig, np.transpose(label_data_orth,
-                                                     (1, 2, 0)))), (1, 2, 0))
-        corr_mats[index] = np.array([np.corrcoef(dat)
-                                     for dat in label_data_cont])[:, 0, 1:].T
-    corr_mats = np.transpose(corr_mats, (2, 0, 1))
-    corr = np.mean(np.array([(np.abs(corr_mat) + np.abs(corr_mat).T) / 2.
-                             for corr_mat in corr_mats]), axis=0)
+    n_epochs, n_labels, _ = data.shape
+    corr = np.zeros((n_labels, n_labels))
+    for epoch_data in data:
+        for ii in range(n_labels):
+            for jj in range(n_labels):
+                # Get timeseries for each pair
+                x, y = epoch_data[ii], epoch_data[jj]
+                x_mag = np.abs(x)
+                x_conj_scaled = x.conj()
+                x_conj_scaled /= x_mag
+                # Calculate orthogonalization
+                y_orth_x = (y * x_conj_scaled).imag
+                y_orth_x_mag = np.abs(y_orth_x)
+                # Estimate correlation
+                corr[ii, jj] += np.abs(np.corrcoef(x_mag, y_orth_x_mag)[0, 1])
+    corr = (corr + corr.T) / (2. * n_epochs)
+    corr.flat[::n_labels + 1] = 0.
+    return corr
```

Review discussion on lines 16 to 30:

> **Author:** @dengemann IIRC the code here was the prototype that I used to code a more efficient version in MNE, so it should have replicated it properly. Unfortunately I had to change this prototype code to do something different in order to get it to follow FieldTrip and the paper definitions...
>
> **Member:** No problem. If we can come up with a more flexible set of options, this is good for research on this method.
>
> **Member:** I second that; differences will be subtle and additional options will be worth comparing.

> **Author:** @SherazKhan the actual critical change in the code below is from [...]. It's probably easier to see if you look at the "simplified" version here. The old [...] Instead of this correlation, with (magnitude) of the [...]
>
> **Member:** @larsoner indeed, this is closer to the original Hipp version.
|
|
|
|
|
|
```diff
@@ -37,7 +41,7 @@ def test_envelope_correlation():
     data = rng.randn(2, 4, 64)
     data_hilbert = hilbert(data, axis=-1)
     corr_orig = _compute_corrs_orig(data_hilbert)
-    assert (0 < corr_orig).all()
+    assert (0 <= corr_orig).all()
     assert (corr_orig < 1).all()
     # using complex data
     corr = envelope_correlation(data_hilbert)
```
|
|
```diff
@@ -72,3 +76,25 @@ def test_envelope_correlation():
     assert_allclose(np.diag(corr_plain_mean), 1)
     np_corr = np.array([np.corrcoef(np.abs(x)) for x in data_hilbert])
     assert_allclose(corr_plain, np_corr)
+
+    # check against FieldTrip, which uses the square-log-norm version
+    # from scipy.io import savemat
+    # savemat('data.mat', dict(data_hilbert=data_hilbert))
+    # matlab
+    # load data
+    # ft_connectivity_powcorr_ortho(reshape(data_hilbert(1,:,:), [4, 64]))
+    # ft_connectivity_powcorr_ortho(reshape(data_hilbert(2,:,:), [4, 64]))
+    ft_vals = np.array([
+        [[np.nan, 0.196734553900236, 0.063173148355451, -0.242638384630448],
+         [0.196734553900236, np.nan, 0.041799775495150, -0.088205187548542],
+         [0.063173148355451, 0.041799775495150, np.nan, 0.090331428512317],
+         [-0.242638384630448, -0.088205187548542, 0.090331428512317, np.nan]],
+        [[np.nan, -0.013270857462890, 0.185200598081295, 0.140284351572544],
+         [-0.013270857462890, np.nan, 0.150981508043722, -0.000671809276372],
+         [0.185200598081295, 0.150981508043722, np.nan, 0.137460244313337],
+         [0.140284351572544, -0.000671809276372, 0.137460244313337, np.nan]],
+    ], float)
+    ft_vals[np.isnan(ft_vals)] = 0
+    corr_log = envelope_correlation(
+        data, combine=None, log=True, absolute=False)
+    assert_allclose(corr_log, ft_vals)
```
Review discussion:

> First change: when orthogonalizing, use `abs` of the orthogonalized data rather than just `(data * data_conj_scaled).imag`. Not 100% sure this is right, but conceptually the thing we're correlating with is `abs(data)`, so it seems to make sense to take the `abs` here, too... I think the original operated on `np.log10(label_data_orth ** 2)`, so an `abs` was implicitly taken by the squaring operation.

> I am not sure about the log10, but they do square and take the log-transform. Citation from their supplementary information: "The signal Y(t,f) is orthogonalized in the complex plane with respect to X(t,f). This results in a positive number |Y_orth_X(t,f)| which is then squared and log-transformed, and correlated with |X(t,f)|." (source)

> log10 is the same as log up to a scale factor, and that scale factor goes away in the correlation, so it doesn't really matter. But `np.log` is faster than `np.log10`, so why not!

> ...but that text does make it clear that it should result in a positive number even before the square and log transform, so I think this is correct now (and it will continue to be correct on the next push, where I change `np.log10` to `np.log`).

> I think `abs` was missing in the original code because of the squaring in the next step.
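The log-base argument is easy to verify numerically: Pearson correlation is invariant to positive rescaling of an input, and `log10(x) = log(x) / log(10)` is just such a rescaling of `log(x)`. A toy check (data invented for illustration):

```python
import numpy as np

rng = np.random.RandomState(42)
x = np.abs(rng.randn(1000)) + 0.1  # positive "envelope" values
y = rng.randn(1000)

# correlation with natural log vs base-10 log of the same envelope
r_log = np.corrcoef(np.log(x), y)[0, 1]
r_log10 = np.corrcoef(np.log10(x), y)[0, 1]
print(np.allclose(r_log, r_log10))  # True
```

So swapping `np.log10` for `np.log` changes the intermediate values but not the correlation coefficients.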