
0.2.7 #901

Merged · 37 commits · Nov 6, 2023

Changes from 1 commit
00f73cc
adjust delay
DominiqueMakowski Aug 29, 2023
1570a10
change coarsegraining
DominiqueMakowski Aug 30, 2023
a12ce58
entropy_sample: return phi in info dict
DominiqueMakowski Aug 31, 2023
820d0d9
return more info in entropy_sample
DominiqueMakowski Aug 31, 2023
487407d
Update utils_complexity_coarsegraining.py
DominiqueMakowski Sep 4, 2023
5768e51
0.2.7
DominiqueMakowski Sep 12, 2023
c5e95d3
Merge branch 'dev' into fix_entropymultiscale
DominiqueMakowski Sep 13, 2023
6fbef91
rename .utils to .utils_entropy
DominiqueMakowski Sep 15, 2023
aea7de4
complexity_decorrelation(): add 'show' argument
DominiqueMakowski Sep 16, 2023
731405c
linting
DominiqueMakowski Sep 16, 2023
41b923a
move entropy_approximate internals and add entropy_quadratic
DominiqueMakowski Sep 16, 2023
ec67af6
add to docs
DominiqueMakowski Sep 16, 2023
19d5487
Update entropy_svd.py
DominiqueMakowski Sep 16, 2023
f4b63f0
add makowski method LLE
DominiqueMakowski Sep 21, 2023
9af817e
Update complexity_lyapunov.py
DominiqueMakowski Sep 21, 2023
067f1cc
fix scikit-learn #910
DominiqueMakowski Sep 21, 2023
a7382fd
Update entropy_quadratic.py
DominiqueMakowski Sep 21, 2023
e424839
Merge pull request #892 from neuropsychology/fix_entropymultiscale
DominiqueMakowski Sep 22, 2023
763e419
Merge branch 'dev' into lyapunov
DominiqueMakowski Sep 22, 2023
97ef675
fix test
DominiqueMakowski Sep 22, 2023
9dffa6f
minor docs
DominiqueMakowski Sep 27, 2023
215f2fa
Update eda_plot.py
mperreir Oct 2, 2023
b05a700
Update eda_intervalrelated.py
mperreir Oct 2, 2023
379d219
Merge pull request #916 from mperreir/fix_plot_eda
DominiqueMakowski Oct 2, 2023
44959c6
Merge pull request #906 from neuropsychology/lyapunov
DominiqueMakowski Oct 2, 2023
beea596
Merge branch 'dev' into pr/917
DominiqueMakowski Oct 2, 2023
5a1d2ad
Update eda_intervalrelated.py
mperreir Oct 3, 2023
6d611c9
Merge branch 'fix_eda_intervalrelated' of https://github.com/mperreir…
mperreir Oct 3, 2023
984194a
minor renaming
DominiqueMakowski Oct 3, 2023
484dd7f
Merge pull request #917 from mperreir/fix_eda_intervalrelated
DominiqueMakowski Oct 9, 2023
d7120b4
comment off intervals_process example
DominiqueMakowski Oct 9, 2023
4cd82c6
add new detector
DominiqueMakowski Oct 24, 2023
b4a57f2
Update read_xdf.py
DominiqueMakowski Oct 24, 2023
323929f
Merge pull request #920 from neuropsychology/ecg_peak_manikandan
DominiqueMakowski Oct 25, 2023
7ab0277
fixed sklearn sanity checks for valid_metrics
stamate Nov 2, 2023
76a1efa
Merge branch 'dev' into pr/925
DominiqueMakowski Nov 2, 2023
c1e38e1
Merge pull request #925 from stamate/bugfix-sklearn_version_valid_met…
DominiqueMakowski Nov 2, 2023
add makowski method LLE
DominiqueMakowski committed Sep 21, 2023
commit f4b63f02879fe953092dad808a015be78483a8bf
154 changes: 123 additions & 31 deletions neurokit2/complexity/complexity_lyapunov.py
@@ -5,9 +5,11 @@
import numpy as np
import pandas as pd
import sklearn.metrics.pairwise
import sklearn.neighbors

from ..misc import NeuroKitWarning
from ..misc import NeuroKitWarning, find_knee
from ..signal.signal_psd import signal_psd
from .optim_complexity_tolerance import complexity_tolerance
from .utils_complexity_embedding import complexity_embedding


@@ -16,17 +18,15 @@ def complexity_lyapunov(
delay=1,
dimension=2,
method="rosenstein1993",
len_trajectory=20,
matrix_dim=4,
min_neighbors="default",
separation="auto",
**kwargs,
):
"""**(Largest) Lyapunov Exponent (LLE)**

Lyapunov exponents (LE) describe the rate of exponential separation (convergence or divergence)
of nearby trajectories of a dynamical system. It is a measure of sensitive dependence on
initial conditions, i.e. how quickly two nearby states diverge. A system can have multiple LEs,
equal to thenumber of the dimensionality of the phase space, and the largest LE value, "LLE" is
equal to the number of the dimensionality of the phase space, and the largest LE value, "LLE" is
often used to determine the overall predictability of the dynamical system.

Different algorithms exist to estimate these indices:
@@ -37,13 +37,17 @@
neighbouring points are then tracked along their distance trajectories for a number of data
points. The slope of the line using a least-squares fit of the mean log trajectory of the
distances gives the final LLE.
* **Eckmann et al. (1996)** computes LEs by first reconstructing the time series using a
* **Makowski** is a custom modification of Rosenstein's algorithm that uses a KDTree for a
more efficient nearest-neighbour search. Additionally, the LLE is computed as the slope up to
the changepoint of the divergence rate (the point where it flattens out), making it more
robust to the choice of the trajectory length parameter.
* **Eckmann et al. (1986)** computes LEs by first reconstructing the time series using a
delay-embedding method, and obtains the tangent that maps to the reconstructed dynamics using
a least-squares fit, where the LEs are deduced from the tangent maps.
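As a toy illustration of the quantity these methods estimate (a self-contained sketch, not NeuroKit's implementation), one can track two nearby orbits of the logistic map at *r* = 4, whose LLE is known analytically to be ln 2:

```python
import numpy as np

# Logistic map at r=4: a chaotic system whose largest Lyapunov
# exponent is known analytically (ln 2 ~= 0.693).
def logistic_orbit(x0, n):
    out = np.empty(n)
    x = x0
    for i in range(n):
        out[i] = x
        x = 4.0 * x * (1.0 - x)
    return out

a = logistic_orbit(0.3, 40)
b = logistic_orbit(0.3 + 1e-9, 40)  # nearby initial condition

# While the two orbits remain close, their log-distance grows roughly
# linearly; the slope of that growth estimates the LLE.
log_dist = np.log(np.abs(a - b))
k = np.arange(15)  # early, pre-saturation portion of the curve
lle_estimate = np.polyfit(k, log_dist[:15], 1)[0]
```

A positive slope signals sensitive dependence on initial conditions; the finite-window estimate fluctuates around ln 2 depending on the starting point and fitting window.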

.. warning::

The **Eckman (1996)** method currently does not work. Please help us fixing it by double
The **Eckmann (1986)** method currently does not work. Please help us fix it by double-checking
the code against the paper and figuring out what's wrong. Overall, we would like to improve
this function to return, for instance, all the exponents (the Lyapunov spectrum), and to
implement newer and faster methods (e.g., Balcerzak, 2018, 2020). If you're interested
@@ -59,17 +63,17 @@
dimension : int
Embedding Dimension (*m*, sometimes referred to as *d* or *order*). See
:func:`complexity_dimension` to estimate the optimal value for this parameter. If method
is ``"eckmann1996"``, larger values for dimension are recommended.
is ``"eckmann1986"``, larger values for dimension are recommended.
method : str
The method that defines the algorithm for computing LE. Can be one of ``"rosenstein1993"``
or ``"eckmann1996"``.
The method that defines the algorithm for computing LE. Can be one of ``"rosenstein1993"``,
``"makowski"``, or ``"eckmann1986"``.
len_trajectory : int
Applies when method is ``"rosenstein1993"``. The number of data points in which
neighboring trajectories are followed.
matrix_dim : int
Applies when method is ``"eckmann1996"``. Corresponds to the number of LEs to return.
Applies when method is ``"eckmann1986"``. Corresponds to the number of LEs to return.
min_neighbors : int, str
Applies when method is ``"eckmann1996"``. Minimum number of neighbors. If ``"default"``,
Applies when method is ``"eckmann1986"``. Minimum number of neighbors. If ``"default"``,
``min(2 * matrix_dim, matrix_dim + 4)`` is used.
**kwargs : optional
Other arguments to be passed to ``signal_psd()`` for calculating the minimum temporal
@@ -79,7 +83,7 @@
--------
lle : float
An estimate of the largest Lyapunov exponent (LLE) if method is ``"rosenstein1993"``, and
an array of LEs if ``"eckmann1996"``.
an array of LEs if ``"eckmann1986"``.
info : dict
A dictionary containing additional information regarding the parameters used
to compute LLE.
@@ -90,19 +94,24 @@

import neurokit2 as nk

signal = nk.signal_simulate(duration=3, sampling_rate=100, frequency=[5, 8], noise=0.5)
signal = nk.signal_simulate(duration=5, sampling_rate=100, frequency=[5, 8], noise=0.1)

# Rosenstein's method
@savefig p_complexity_lyapunov.png scale=100%
lle, info = nk.complexity
_lyapunov(signal, method="rosenstein1993", show=True)
@savefig p_complexity_lyapunov1.png scale=100%
lle, info = nk.complexity_lyapunov(signal, method="rosenstein", show=True)
@suppress
plt.close()

lle

# Makowski's change-point method
@savefig p_complexity_lyapunov2.png scale=100%
lle, info = nk.complexity_lyapunov(signal, method="makowski", show=True)
@suppress
plt.close()

# Eckmann's method is broken. Please help us fix it!
# lle, info = nk.complexity_lyapunov(signal, dimension=2, method="eckmann1996")
# lle, info = nk.complexity_lyapunov(signal, dimension=2, method="eckmann1986")

References
----------
@@ -126,36 +135,45 @@

# "We impose the additional constraint that nearest neighbors have a temporal separation
# greater than the mean period of the time series: This allows us to consider each pair of
# neighbors as nearby initial conditions for different trajectories.""
# neighbors as nearby initial conditions for different trajectories."

# "We estimated the mean period as the reciprocal of the mean frequency of the power spectrum,
# although we expect any comparable estimate, e.g., using the median frequency of the magnitude
# spectrum, to yield equivalent results."
if separation == "auto":
# Actual sampling rate does not matter
psd = signal_psd(
signal, sampling_rate=1000, method="fft", normalize=False, show=False
)
mean_freq = np.sum(psd["Power"] * psd["Frequency"]) / np.sum(psd["Power"])

# 1 / mean_freq = seconds per cycle
separation = int(np.ceil(1 / mean_freq * 1000))
else:
assert isinstance(separation, int), "'separation' should be an integer."
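The mean-period heuristic above can be sketched standalone (a plain NumPy FFT stands in for NeuroKit's `signal_psd`, and `mean_period_separation` is a hypothetical helper name):

```python
import numpy as np

def mean_period_separation(signal, sampling_rate=1000):
    """Estimate the temporal separation as the mean period of the signal.

    Mean frequency = power-weighted average over the FFT spectrum;
    its reciprocal, converted to samples, is the mean period.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1 / sampling_rate)
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs, power = freqs[1:], power[1:]  # skip the DC component
    mean_freq = np.sum(power * freqs) / np.sum(power)
    return int(np.ceil(sampling_rate / mean_freq))

# A pure 10 Hz sine sampled at 1000 Hz has a mean period of ~100 samples
t = np.linspace(0, 1, 1000, endpoint=False)
sep = mean_period_separation(np.sin(2 * np.pi * 10 * t))
```

Because the power spectrum of a pure sine concentrates at its own frequency, the returned separation is approximately `sampling_rate / frequency` here.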

# Run algorithm
# ----------------
# Method
method = method.lower()
if method in ["rosenstein", "rosenstein1993"]:
le, parameters = _complexity_lyapunov_rosenstein(
signal, delay, dimension, separation, len_trajectory, **kwargs
signal, delay, dimension, separation, **kwargs
)
elif method in ["makowski"]:
le, parameters = _complexity_lyapunov_makowski(
signal, delay, dimension, separation, **kwargs
)
elif method in ["eckmann", "eckmann1996"]:
elif method in ["eckmann", "eckmann1986", "eckmann1996"]:
le, parameters = _complexity_lyapunov_eckmann(
signal,
dimension=dimension,
separation=separation,
matrix_dim=matrix_dim,
min_neighbors=min_neighbors,
)
else:
raise ValueError(
"NeuroKit error: complexity_lyapunov(): 'method' should be one of "
" 'rosenstein1993', 'makowski', 'eckmann1986'."
)

# Store params
@@ -175,6 +193,80 @@
# =============================================================================
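The temporally separated nearest-neighbour lookup used below can be illustrated in isolation (toy random data; only `sklearn.neighbors.KDTree`, which the patch imports, is assumed):

```python
import numpy as np
from sklearn.neighbors import KDTree

rng = np.random.default_rng(42)
embedded = rng.normal(size=(50, 2))  # stand-in for a delay-embedded signal
separation = 3

tree = KDTree(embedded, metric="euclidean")
# Query 2 + separation points per row: column 0 is the point itself
# (distance 0), the rest are nearest neighbors in increasing distance.
idx = tree.query(embedded, k=2 + separation, return_distance=False)
# Keeping only the last column mimics skipping the `separation` closest
# candidates, which tend to be temporally adjacent embedding vectors.
neighbors = idx[:, -1]
```

A single batched `query` call replaces the pairwise distance matrix of the original Rosenstein implementation, which is the efficiency gain the commit message refers to.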


def _complexity_lyapunov_makowski(
signal,
delay=1,
dimension=2,
separation=1,
max_length="auto",
show=False,
):
# Store parameters
info = {
"Dimension": dimension,
"Delay": delay,
}

# Embedding
embedded = complexity_embedding(signal, delay=delay, dimension=dimension)
n = len(embedded)

# Set the maximum trajectory length to 10 times the delay
if max_length == "auto":
max_length = int(delay * 10)
if max_length >= n / 2:
max_length = n // 2

# Create KDTree and query for nearest neighbors
tree = sklearn.neighbors.KDTree(embedded, metric="euclidean")

# Query k = 2 + separation neighbors: the first result is the point itself, and the rest
# are ordered by distance, so the last one is the nearest neighbor remaining after the
# `separation` closest candidates are skipped.
idx = tree.query(embedded, k=2 + separation, return_distance=False)

# The neighbor outside the `separation` region will be the last one in the returned list.
idx = idx[:, -1]

# Compute the average divergence for each trajectory length
trajectories = np.zeros(max_length)
for k in range(1, max_length + 1):
valid = np.where((np.arange(n - k) + k < n) & (idx[: n - k] + k < n))[0]

if valid.size == 0:
trajectories[k - 1] = -np.inf
continue

divergences = np.linalg.norm(
embedded[valid + k] - embedded[idx[valid] + k],
axis=1,
)
divergences = divergences[divergences > 0]
if len(divergences) == 0:
trajectories[k - 1] = np.nan
else:
trajectories[k - 1] = np.mean(np.log(divergences))

# Change point
x_axis = range(1, len(trajectories) + 1)
knee = find_knee(y=trajectories, x=x_axis, show=False, verbose=False)
info["Divergence_Rate"] = trajectories
info["Changepoint"] = knee

# Linear fit
slope, intercept = np.polyfit(x_axis[0:knee], trajectories[0:knee], 1)
if show is True:
plt.plot(np.arange(1, len(trajectories) + 1), trajectories)
plt.axvline(knee, color="red", label="Changepoint", linestyle="--")
plt.axline(
(0, intercept), slope=slope, color="orange", label="Least-squares Fit"
)
plt.ylim(bottom=np.min(trajectories))
plt.ylabel("Divergence Rate")
plt.title(f"Largest Lyapunov Exponent (slope of the line) = {slope:.3f}")
plt.legend()
return slope, info
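The change-point-then-slope idea can be sketched with synthetic data (a crude first-difference threshold stands in for NeuroKit's `find_knee`):

```python
import numpy as np

# Synthetic divergence-rate curve: linear growth that saturates, as
# typically observed once neighbor distances stop expanding.
k = np.arange(1, 41)
trajectories = np.minimum(0.05 * k, 1.0) + 0.001 * np.sin(k)

# Hypothetical stand-in for find_knee(): first step where the local
# growth drops well below the initial growth rate.
diffs = np.diff(trajectories)
knee = int(np.argmax(diffs < 0.5 * diffs[0])) + 1

# LLE estimate: slope of the least-squares line fitted up to the knee
slope, intercept = np.polyfit(k[:knee], trajectories[:knee], 1)
```

Restricting the fit to the pre-plateau region is what makes the estimate insensitive to how far past the saturation point the trajectories are tracked.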


def _complexity_lyapunov_rosenstein(
signal, delay=1, dimension=2, separation=1, len_trajectory=20, show=False, **kwargs
):