Develop hiddenmarkovnormal #44


Merged: 71 commits, Nov 20, 2022
Commits
26b1b9e
Create hiddenmarkovnormal.md
yuta-nakahara Jul 21, 2022
52d88bf
Create hiddenmarkovautoregressive.md
yuta-nakahara Jul 21, 2022
f523912
Update hiddenmarkovnormal.md
RyoheiO Jul 30, 2022
19868d1
Update hiddenmarkovnormal.md
RyoheiO Jul 30, 2022
1bd1f87
write generative model
RyoheiO Jul 30, 2022
6aeabad
Update HMM resume
RyoheiO Jul 31, 2022
b8d3942
Update hiddenmarkovnormal.md
RyoheiO Aug 2, 2022
9d4615e
Update hiddenmarkovautoregressive.md
yuta-nakahara Aug 3, 2022
6d98e1d
Merge pull request #17 from yuta-nakahara/develop-hiddenmarkovnormal-…
yuta-nakahara Aug 7, 2022
4c8237d
Merge branch 'develop' into develop-hiddenmarkovnormal
yuta-nakahara Aug 9, 2022
4b83991
Merge branch 'develop-check' into develop-hiddenmarkovnormal
yuta-nakahara Aug 9, 2022
f247c6d
Add files via upload
kkkzmwsd Aug 25, 2022
28054a1
Update hiddenmarkovautoregressive.md
kkkzmwsd Sep 2, 2022
5cd4b92
Update hiddenmarkovautoregressive.md
kkkzmwsd Sep 2, 2022
cd0ec46
Update hiddenmarkovautoregressive.md
kkkzmwsd Sep 2, 2022
a7b415a
Update hiddenmarkovautoregressive.md
kkkzmwsd Sep 2, 2022
b21e930
Create hiddenmarkovautoregressive.md
kkkzmwsd Sep 2, 2022
2360a2f
Update hiddenmarkovautoregressive.md
kkkzmwsd Sep 2, 2022
f0ccfaf
Update hiddenmarkovautoregressive.md
kkkzmwsd Sep 2, 2022
4fb4c20
Update hiddenmarkovautoregressive.md
kkkzmwsd Sep 3, 2022
7d2ac37
Delete hiddenmarkovautoregressive.md
kkkzmwsd Sep 3, 2022
2008345
Merge pull request #23 from yuta-nakahara/develop-hiddenmarkovautoreg…
yuta-nakahara Sep 3, 2022
239f0a2
start my work
NJ-private Sep 11, 2022
849c69d
add params for data generation without hyper params
NJ-private Sep 11, 2022
ad521e8
add params for data generation without hyper params
NJ-private Sep 11, 2022
5191554
update posterior distribution and predictive distributio
RyoheiO Sep 17, 2022
6df9a50
make self.__init__() for hiddenmarkovautoregressive and add check dim…
NJ-private Sep 18, 2022
0708936
add set_h_params
NJ-private Sep 18, 2022
9723a6d
add new check consistency
NJ-private Sep 23, 2022
539e963
bug fix
NJ-private Sep 23, 2022
8a37e7a
some modify
NJ-private Oct 2, 2022
ac6208c
some modify
NJ-private Oct 9, 2022
bb82eb4
add set_params and get_params and some modify of __init__
NJ-private Oct 9, 2022
37fa84d
init function
yuta-nakahara Oct 9, 2022
8522517
Merge branch 'develop-check' into develop-hiddenmarkovnormal-initgetset
yuta-nakahara Oct 9, 2022
8a03f2c
Merge branch 'develop-hiddenmarkovautoregressive-GenModel' into devel…
yuta-nakahara Oct 9, 2022
c09204f
add set_params
NJ-private Oct 9, 2022
571a7c0
little modify
NJ-private Oct 9, 2022
9cd44c2
modify set_params and add shape_consistency
NJ-private Oct 10, 2022
bd174ef
little modify and add set_h_params before eta and zeta
NJ-private Oct 10, 2022
de87b59
Merge branch 'develop-check' into develop-hiddenmarkovnormal-initgetset
yuta-nakahara Oct 11, 2022
d066598
Revise set_params and set_h_params
yuta-nakahara Oct 11, 2022
e458e3c
Added get_params and get_h_params
yuta-nakahara Oct 12, 2022
dc67df4
Add set, get, etc to LearnModel
yuta-nakahara Oct 12, 2022
b0ed3b0
Merge branch 'develop-base' into develop-hiddenmarkovnormal-initgetset
yuta-nakahara Oct 13, 2022
ff1d77e
Delete rest_hn_params and overwrite_h0_params
yuta-nakahara Oct 13, 2022
fd91b6b
little modify
yuta-nakahara Oct 13, 2022
8bfd21a
Merge pull request #30 from yuta-nakahara/develop-hiddenmarkovnormal-…
yuta-nakahara Oct 13, 2022
76667f1
Update hiddenmarkovnormal.md
RyoheiO Oct 22, 2022
a13ad15
Revise posterior and predictive distribution
yuta-nakahara Oct 27, 2022
13cb44b
Merge pull request #25 from yuta-nakahara/develop-hiddenmarkovnormal-…
yuta-nakahara Oct 27, 2022
88dee13
Update _check.py
yuta-nakahara Nov 15, 2022
4a863ac
Update _check.py
yuta-nakahara Nov 15, 2022
d1e199a
Merge branch 'develop' into develop-hiddenmarkovnormal-GenModel
yuta-nakahara Nov 17, 2022
8edd787
Create _gaussianmixture_for_ref.py
yuta-nakahara Nov 17, 2022
cb81f22
Add docstring
yuta-nakahara Nov 17, 2022
9b7dc1d
Add gen_ and visualize
yuta-nakahara Nov 17, 2022
6d2d639
Merge pull request #38 from yuta-nakahara/develop-hiddenmarkovnormal-…
yuta-nakahara Nov 17, 2022
bc51400
Add docstring
yuta-nakahara Nov 17, 2022
151f00a
Add calc_vl
yuta-nakahara Nov 18, 2022
07ec7e4
Add VB alg
yuta-nakahara Nov 19, 2022
9537119
Add visualize_posterior
yuta-nakahara Nov 19, 2022
757ac61
Add pred functions
yuta-nakahara Nov 19, 2022
4e1bccb
Add estimate_latent_vars
yuta-nakahara Nov 19, 2022
b3f1672
Merge pull request #39 from yuta-nakahara/develop-hiddenmarkovnormal-…
yuta-nakahara Nov 19, 2022
fc9e382
Remove unnecessary files
yuta-nakahara Nov 20, 2022
86eb7dd
Delete __init__.py
yuta-nakahara Nov 20, 2022
156ae33
Delete hiddenmarkovautoregressive.md
yuta-nakahara Nov 20, 2022
a46a031
Delete _hiddenmarkovautoregressive.py
yuta-nakahara Nov 20, 2022
bc14ef9
Merge branch 'develop-check' into develop-hiddenmarkovnormal
yuta-nakahara Nov 20, 2022
e9df92d
Update _check.py
yuta-nakahara Nov 20, 2022
26 changes: 26 additions & 0 deletions bayesml/_check.py
@@ -47,12 +47,30 @@ def nonneg_ints(val,val_name,exception_class):
return val
raise(exception_class(val_name + " must be int or a numpy.ndarray whose dtype is int. Its values must be non-negative (including 0)."))

def int_vec(val,val_name,exception_class):
if type(val) is np.ndarray:
if np.issubdtype(val.dtype,np.integer) and val.ndim == 1:
return val
raise(exception_class(val_name + " must be a 1-dimensional numpy.ndarray whose dtype is int."))

def nonneg_int_vec(val,val_name,exception_class):
if type(val) is np.ndarray:
if np.issubdtype(val.dtype,np.integer) and val.ndim == 1 and np.all(val>=0):
return val
raise(exception_class(val_name + " must be a 1-dimensional numpy.ndarray whose dtype is int. Its values must be non-negative (including 0)."))

def nonneg_int_vecs(val,val_name,exception_class):
if type(val) is np.ndarray:
if np.issubdtype(val.dtype,np.integer) and val.ndim >= 1 and np.all(val>=0):
return val
raise(exception_class(val_name + " must be a numpy.ndarray whose ndim >= 1 and dtype is int. Its values must be non-negative (including 0)."))

def nonneg_float_vec(val,val_name,exception_class):
if type(val) is np.ndarray:
if np.issubdtype(val.dtype,np.floating) and val.ndim == 1 and np.all(val>=0):
return val
raise(exception_class(val_name + " must be a 1-dimensional numpy.ndarray whose dtype is float. Its values must be non-negative (including 0)."))

def int_of_01(val,val_name,exception_class):
if np.issubdtype(type(val),np.integer):
if val == 0 or val ==1:
@@ -173,6 +191,14 @@ def float_vecs(val,val_name,exception_class):
return val
raise(exception_class(val_name + " must be a numpy.ndarray whose ndim >= 1."))

def pos_float_vecs(val,val_name,exception_class):
if type(val) is np.ndarray:
if np.issubdtype(val.dtype,np.integer) and val.ndim >= 1 and np.all(val>0):
return val.astype(float)
if np.issubdtype(val.dtype,np.floating) and val.ndim >= 1 and np.all(val>0.0):
return val
raise(exception_class(val_name + " must be a numpy.ndarray whose ndim >= 1. Its values must be positive (not including 0)."))

def float_vec_sum_1(val,val_name,exception_class):
if type(val) is np.ndarray:
if np.issubdtype(val.dtype,np.integer) and val.ndim == 1 and abs(val.sum() - 1.) <= _EPSILON:
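For reference, the behavior of these checkers can be exercised in isolation. The snippet below copies the `nonneg_int_vec` logic from the diff (the helper is reproduced only so the example is self-contained; it is not an import from the package):

```python
import numpy as np

def nonneg_int_vec(val, val_name, exception_class):
    # Mirrors the checker added above: accept only a 1-dimensional
    # integer ndarray whose entries are all non-negative.
    if type(val) is np.ndarray:
        if np.issubdtype(val.dtype, np.integer) and val.ndim == 1 and np.all(val >= 0):
            return val
    raise exception_class(
        val_name + " must be a 1-dimensional numpy.ndarray whose dtype is int."
        " Its values must be non-negative (including 0).")

# A valid latent-state index sequence passes through unchanged:
z = nonneg_int_vec(np.array([0, 2, 1]), "z", ValueError)

# A float vector (or a negative entry) raises the supplied exception class:
try:
    nonneg_int_vec(np.array([0.5, 1.0]), "z", ValueError)
except ValueError as err:
    print(err)
```

Passing the exception class as an argument lets each model raise its own `ParameterFormatError`-style type while sharing one checker.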
4 changes: 2 additions & 2 deletions bayesml/autoregressive/_autoregressive.py
@@ -30,11 +30,11 @@ class GenModel(base.Generative):
h_mu_vec : numpy ndarray, optional
a vector of real numbers, by default [0.0, 0.0, ... , 0.0]
h_lambda_mat : numpy ndarray, optional
a positibe definate matrix, by default the identity matrix
a positive definite matrix, by default the identity matrix
h_alpha : float, optional
a positive real number, by default 1.0
h_beta : float, optional
a positibe real number, by default 1.0
a positive real number, by default 1.0
seed : {None, int}, optional
A seed to initialize numpy.random.default_rng(),
by default None
132 changes: 132 additions & 0 deletions bayesml/hiddenmarkovnormal/__init__.py
@@ -0,0 +1,132 @@
# Document Author
# Ryohei Oka <o.ryohei07@gmail.com>
r"""
The hidden Markov model with the Gauss-Wishart prior distribution and the Dirichlet prior distribution.

The stochastic data generative model is as follows:

* :math:`K \in \mathbb{N}`: number of latent classes
* :math:`\boldsymbol{z} \in \{ 0, 1 \}^K`: a one-hot vector representing the latent class (latent variable)
* :math:`\boldsymbol{\pi} \in [0, 1]^K`: a parameter for the initial latent class (:math:`\sum_{k=1}^K \pi_k=1`)
* :math:`a_{j,k} \in [0,1]`: the transition probability from latent state :math:`j` to latent state :math:`k`
* :math:`\boldsymbol{a}_j = [a_{j,1}, a_{j,2}, \dots , a_{j,K}]\in [0,1]^K`, a vector of the transition probability (:math:`\sum_{k=1}^K a_{j,k}=1`)
* :math:`\boldsymbol{A}=(a_{j,k})_{1\leq j,k\leq K} \in [0, 1]^{K\times K}`: a matrix of the transition probability
* :math:`D \in \mathbb{N}`: a dimension of data
* :math:`\boldsymbol{x} \in \mathbb{R}^D`: a data point
* :math:`\boldsymbol{\mu}_k \in \mathbb{R}^D`: a parameter
* :math:`\boldsymbol{\mu} = \{ \boldsymbol{\mu}_k \}_{k=1}^K`
* :math:`\boldsymbol{\Lambda}_k \in \mathbb{R}^{D\times D}` : a parameter (a positive definite matrix)
* :math:`\boldsymbol{\Lambda} = \{ \boldsymbol{\Lambda}_k \}_{k=1}^K`
* :math:`| \boldsymbol{\Lambda}_k | \in \mathbb{R}`: the determinant of :math:`\boldsymbol{\Lambda}_k`

.. math::
p(\boldsymbol{z}_{1} | \boldsymbol{\pi}) &= \mathrm{Cat}(\boldsymbol{z}_{1}|\boldsymbol{\pi}) = \prod_{k=1}^K \pi_k^{z_{1,k}},\\
p(\boldsymbol{z}_{n} |\boldsymbol{z}_{n-1} ,\boldsymbol{A}) &= \prod_{k=1}^K \prod_{j=1}^K a_{j,k}^{z_{n-1,j}z_{n,k}},\\
p(\boldsymbol{x}_{n} | \boldsymbol{\mu}, \boldsymbol{\Lambda}, \boldsymbol{z}_{n}) &= \prod_{k=1}^K \mathcal{N}(\boldsymbol{x}_n|\boldsymbol{\mu}_k,\boldsymbol{\Lambda}_k^{-1})^{z_{n,k}} \\
&= \prod_{k=1}^K \left( \frac{| \boldsymbol{\Lambda}_{k} |^{1/2}}{(2\pi)^{D/2}} \exp \left\{ -\frac{1}{2}(\boldsymbol{x}_n-\boldsymbol{\mu}_{k})^\top \boldsymbol{\Lambda}_{k} (\boldsymbol{x}_n-\boldsymbol{\mu}_{k}) \right\} \right)^{z_{n,k}},
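As a concrete illustration, the generative process above can be sampled directly. The parameter values below are arbitrary examples chosen for the sketch, not defaults of the package:

```python
import numpy as np

rng = np.random.default_rng(0)
K, D, n = 2, 2, 100          # latent classes, data dimension, sample size

pi = np.array([0.6, 0.4])                    # initial distribution pi
A = np.array([[0.9, 0.1], [0.2, 0.8]])       # transition matrix (rows sum to 1)
mu = np.array([[-2.0, 0.0], [2.0, 0.0]])     # mu_k
Lambda = np.stack([np.eye(D), np.eye(D)])    # Lambda_k (precision matrices)

z = np.empty(n, dtype=int)
x = np.empty((n, D))
for i in range(n):
    if i == 0:
        z[0] = rng.choice(K, p=pi)           # z_1 ~ Cat(pi)
    else:
        z[i] = rng.choice(K, p=A[z[i - 1]])  # z_i | z_{i-1} ~ row of A
    # x_i | z_i ~ N(mu_k, Lambda_k^{-1})
    x[i] = rng.multivariate_normal(mu[z[i]], np.linalg.inv(Lambda[z[i]]))
```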

The prior distribution is as follows:

* :math:`\boldsymbol{m}_0 \in \mathbb{R}^{D}`: a hyperparameter
* :math:`\kappa_0 \in \mathbb{R}_{>0}`: a hyperparameter
* :math:`\nu_0 \in \mathbb{R}`: a hyperparameter (:math:`\nu_0 > D-1`)
* :math:`\boldsymbol{W}_0 \in \mathbb{R}^{D\times D}`: a hyperparameter (a positive definite matrix)
* :math:`\boldsymbol{\eta}_0 \in \mathbb{R}_{> 0}^K`: a hyperparameter
* :math:`\boldsymbol{\zeta}_{0,j} \in \mathbb{R}_{> 0}^K`: a hyperparameter
* :math:`\mathrm{Tr} \{ \cdot \}`: a trace of a matrix
* :math:`\Gamma (\cdot)`: the gamma function

.. math::
p(\boldsymbol{\mu},\boldsymbol{\Lambda},\boldsymbol{\pi},\boldsymbol{A}) &= \left\{ \prod_{k=1}^K \mathcal{N}(\boldsymbol{\mu}_k|\boldsymbol{m}_0,(\kappa_0 \boldsymbol{\Lambda}_k)^{-1})\mathcal{W}(\boldsymbol{\Lambda}_k|\boldsymbol{W}_0, \nu_0) \right\} \mathrm{Dir}(\boldsymbol{\pi}|\boldsymbol{\eta}_0) \prod_{j=1}^{K}\mathrm{Dir}(\boldsymbol{a}_{j}|\boldsymbol{\zeta}_{0,j}), \\
&= \Biggl[ \prod_{k=1}^K \left( \frac{\kappa_0}{2\pi} \right)^{D/2} |\boldsymbol{\Lambda}_k|^{1/2} \exp \left\{ -\frac{\kappa_0}{2}(\boldsymbol{\mu}_k -\boldsymbol{m}_0)^\top \boldsymbol{\Lambda}_k (\boldsymbol{\mu}_k - \boldsymbol{m}_0) \right\} \\
&\qquad \times B(\boldsymbol{W}_0, \nu_0) | \boldsymbol{\Lambda}_k |^{(\nu_0 - D - 1) / 2} \exp \left\{ -\frac{1}{2} \mathrm{Tr} \{ \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda}_k \} \right\} \Biggr] \\
&\qquad \times \Biggl[ C(\boldsymbol{\eta}_0) \prod_{k=1}^K \pi_k^{\eta_{0,k}-1} \Biggr] \\
&\qquad \times \Biggl[ \prod_{j=1}^K C(\boldsymbol{\zeta}_{0,j}) \prod_{k=1}^K a_{j,k}^{\zeta_{0,j,k}-1} \Biggr],

where :math:`B(\boldsymbol{W}_0, \nu_0)`, :math:`C(\boldsymbol{\eta}_0)`, and :math:`C(\boldsymbol{\zeta}_{0,j})` are defined as follows:

.. math::
B(\boldsymbol{W}_0, \nu_0) &= | \boldsymbol{W}_0 |^{-\nu_0 / 2} \left( 2^{\nu_0 D / 2} \pi^{D(D-1)/4} \prod_{i=1}^D \Gamma \left( \frac{\nu_0 + 1 - i}{2} \right) \right)^{-1}, \\
C(\boldsymbol{\eta}_0) &= \frac{\Gamma(\sum_{k=1}^K \eta_{0,k})}{\Gamma(\eta_{0,1})\cdots\Gamma(\eta_{0,K})},\\
C(\boldsymbol{\zeta}_{0,j}) &= \frac{\Gamma(\sum_{k=1}^K \zeta_{0,j,k})}{\Gamma(\zeta_{0,j,1})\cdots\Gamma(\zeta_{0,j,K})}.

The approximate posterior distribution in the :math:`t`-th iteration of a variational Bayesian method is as follows:

* :math:`\boldsymbol{x}^n = (\boldsymbol{x}_1, \boldsymbol{x}_2, \dots , \boldsymbol{x}_n) \in \mathbb{R}^{D \times n}`: given data
* :math:`\boldsymbol{z}^n = (\boldsymbol{z}_1, \boldsymbol{z}_2, \dots , \boldsymbol{z}_n) \in \{ 0, 1 \}^{K \times n}`: latent classes of given data
* :math:`\boldsymbol{m}_{n,k}^{(t)} \in \mathbb{R}^{D}`: a hyperparameter
* :math:`\kappa_{n,k}^{(t)} \in \mathbb{R}_{>0}`: a hyperparameter
* :math:`\nu_{n,k}^{(t)} \in \mathbb{R}`: a hyperparameter (:math:`\nu_{n,k}^{(t)} > D-1`)
* :math:`\boldsymbol{W}_{n,k}^{(t)} \in \mathbb{R}^{D\times D}`: a hyperparameter (a positive definite matrix)
* :math:`\boldsymbol{\eta}_n^{(t)} \in \mathbb{R}_{> 0}^K`: a hyperparameter
* :math:`\boldsymbol{\zeta}_{n,j}^{(t)} \in \mathbb{R}_{> 0}^K`: a hyperparameter

.. math::
&q(\boldsymbol{z}^n, \boldsymbol{\mu},\boldsymbol{\Lambda},\boldsymbol{\pi},\boldsymbol{A}) \nonumber \\
&= q^{(t)}(\boldsymbol{z}^n) \left\{ \prod_{k=1}^K \mathcal{N}(\boldsymbol{\mu}_k|\boldsymbol{m}_{n,k}^{(t)},(\kappa_{n,k}^{(t)} \boldsymbol{\Lambda}_k)^{-1})\mathcal{W}(\boldsymbol{\Lambda}_k|\boldsymbol{W}_{n,k}^{(t)}, \nu_{n,k}^{(t)}) \right\} \mathrm{Dir}(\boldsymbol{\pi}|\boldsymbol{\eta}_n^{(t)})\left\{\prod_{j=1}^K\mathrm{Dir}(\boldsymbol{a}_j|\boldsymbol{\zeta}_{n,j}^{(t)})\right\}, \\
&= q^{(t)}(\boldsymbol{z}^n) \Biggl[ \prod_{k=1}^K \left( \frac{\kappa_{n,k}^{(t)}}{2\pi} \right)^{D/2} |\boldsymbol{\Lambda}_k|^{1/2} \exp \left\{ -\frac{\kappa_{n,k}^{(t)}}{2}(\boldsymbol{\mu}_k -\boldsymbol{m}_{n,k}^{(t)})^\top \boldsymbol{\Lambda}_k (\boldsymbol{\mu}_k - \boldsymbol{m}_{n,k}^{(t)}) \right\} \\
&\qquad \times B(\boldsymbol{W}_{n,k}^{(t)}, \nu_{n,k}^{(t)}) | \boldsymbol{\Lambda}_k |^{(\nu_{n,k}^{(t)} - D - 1) / 2} \exp \left\{ -\frac{1}{2} \mathrm{Tr} \{ ( \boldsymbol{W}_{n,k}^{(t)} )^{-1} \boldsymbol{\Lambda}_k \} \right\} \Biggr] \\
&\qquad \times C(\boldsymbol{\eta}_n^{(t)})\prod_{k=1}^K \pi_k^{\eta_{n,k}^{(t)}-1}\left[\prod_{j=1}^K C(\boldsymbol{\zeta}_{n,j}^{(t)})\prod_{k=1}^K a_{j,k}^{\zeta_{n,j,k}^{(t)}-1}\right],\\

where the updating rule of the hyperparameters is as follows.

.. math::
N_k^{(t)} &= \sum_{i=1}^n \gamma^{(t)}_{i,k}, \\
M_{j,k}^{(t)} &= \sum_{i=2}^n \xi^{(t)}_{i,j,k},\\
\bar{\boldsymbol{x}}_k^{(t)} &= \frac{1}{N_k^{(t)}} \sum_{i=1}^n \gamma^{(t)}_{i,k} \boldsymbol{x}_i, \\
S_k^{(t)} &= \frac{1}{N_k^{(t)}}\sum_{i=1}^n \gamma^{(t)}_{i,k} (\boldsymbol{x}_i-\bar{\boldsymbol{x}}_k^{(t)})(\boldsymbol{x}_i-\bar{\boldsymbol{x}}_k^{(t)})^{\top},\\
\boldsymbol{m}_{n,k}^{(t+1)} &= \frac{\kappa_0\boldsymbol{m}_0 + N_k^{(t)} \bar{\boldsymbol{x}}_k^{(t)}}{\kappa_0 + N_k^{(t)}}, \\
\kappa_{n,k}^{(t+1)} &= \kappa_0 + N_k^{(t)}, \\
(\boldsymbol{W}_{n,k}^{(t+1)})^{-1} &= \boldsymbol{W}_0^{-1} + N_k^{(t)}S_k^{(t)} + \frac{\kappa_0 N_k^{(t)}}{\kappa_0 + N_k^{(t)}}(\bar{\boldsymbol{x}}_k^{(t)}-\boldsymbol{m}_0)(\bar{\boldsymbol{x}}_k^{(t)}-\boldsymbol{m}_0)^\top, \\
\nu_{n,k}^{(t+1)} &= \nu_0 + N_k^{(t)},\\
\eta_{n,k}^{(t+1)} &= \eta_{0,k} + \gamma^{(t)}_{1,k}, \\
\zeta_{n,j,k}^{(t+1)} &= \zeta_{0,j,k}+M_{j,k}^{(t)}.
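Assuming the responsibilities :math:`\gamma^{(t)}_{i,k}` have been computed, the Gauss-Wishart part of this update can be sketched for a single class :math:`k`. The function name and the convention of passing :math:`\boldsymbol{W}_0^{-1}` directly are illustrative, not the package API:

```python
import numpy as np

def update_gauss_wishart(x, gamma, m0, kappa0, nu0, w0_inv):
    # One hyperparameter update for class k following the rules above.
    # x: (n, D) data; gamma: (n,) responsibilities gamma_{i,k} for this class.
    n_k = gamma.sum()                              # N_k
    x_bar = gamma @ x / n_k                        # weighted mean x_bar_k
    diff = x - x_bar
    s_k = (gamma[:, None] * diff).T @ diff / n_k   # weighted scatter S_k
    kappa_n = kappa0 + n_k
    m_n = (kappa0 * m0 + n_k * x_bar) / kappa_n
    d0 = x_bar - m0
    w_n_inv = w0_inv + n_k * s_k + kappa0 * n_k / kappa_n * np.outer(d0, d0)
    nu_n = nu0 + n_k
    return m_n, kappa_n, nu_n, w_n_inv
```

With all responsibilities equal to one this reduces to the usual Gauss-Wishart posterior update for fully observed data.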

The approximate posterior distribution of the latent variables :math:`q^{(t+1)}(\boldsymbol{z}^n)` is calculated by the forward-backward algorithm as follows.

.. math::
\ln \rho_{i,k}^{(t+1)} &= \frac{1}{2} \Biggl[\, \sum_{d=1}^D \psi \left( \frac{\nu_{n,k}^{(t+1)} + 1 - d}{2} \right) + D \ln 2 + \ln | \boldsymbol{W}_{n,k}^{(t+1)} | \notag \\
&\qquad - D \ln (2 \pi ) - \frac{D}{\kappa_{n,k}^{(t+1)}} - \nu_{n,k}^{(t+1)} (\boldsymbol{x}_i - \boldsymbol{m}_{n,k}^{(t+1)})^\top \boldsymbol{W}_{n,k}^{(t+1)} (\boldsymbol{x}_i - \boldsymbol{m}_{n,k}^{(t+1)}) \Biggr], \\
\ln \tilde{\pi}_k^{(t+1)} &= \psi (\eta_{n,k}^{(t+1)}) - \psi \left( \textstyle \sum_{k=1}^K \eta_{n,k}^{(t+1)} \right) \\
\ln \tilde{a}_{j,k}^{(t+1)} &= \psi (\zeta_{n,j,k}^{(t+1)}) - \psi \left( \textstyle \sum_{k=1}^K \zeta_{n,j,k}^{(t+1)} \right) \\
\alpha^{(t+1)} (\boldsymbol{z}_i) &\propto
\begin{cases}
\prod_{k=1}^{K} \left( \rho_{i,k}^{(t+1)}\right)^{z_{i,k}} \sum_{\boldsymbol{z}_{i-1}} \left[\prod_{k=1}^{K}\prod_{j=1}^{K}\left(\tilde{a}^{(t+1)}_{j,k}\right)^{z_{i-1,j}z_{i,k}}\alpha^{(t+1)}(\boldsymbol{z}_{i-1})\right] & (i>1)\\
\prod_{k=1}^{K}\left( \rho_{1,k}^{(t+1)} \tilde{\pi}_k^{(t+1)} \right)^{z_{1,k}} & (i=1)
\end{cases} \\
\beta^{(t+1)} (\boldsymbol{z}_i) &\propto
\begin{cases}
\sum_{\boldsymbol{z}_{i+1}} \left[ \prod_{k=1}^{K} \left( \rho_{i+1,k}^{(t+1)}\right)^{z_{i+1,k}} \prod_{k=1}^{K}\prod_{j=1}^{K}\left(\tilde{a}^{(t+1)}_{j,k}\right)^{z_{i,j}z_{i+1,k}}\beta^{(t+1)}(\boldsymbol{z}_{i+1})\right] & (i<n)\\
1 & (i=n)
\end{cases} \\
q^{(t+1)}(\boldsymbol{z}_i) &\propto \alpha^{(t+1)}(\boldsymbol{z}_i)\beta^{(t+1)}(\boldsymbol{z}_i) \\
\gamma^{(t+1)}_{i,k} &= \sum_{\boldsymbol{z}_i} q^{(t+1)}(\boldsymbol{z}_i) z_{i,k}\\
q^{(t+1)}(\boldsymbol{z}_{i-1}, \boldsymbol{z}_{i}) &\propto \alpha^{(t+1)}(\boldsymbol{z}_{i-1}) \prod_{k=1}^{K} \left( \rho_{i,k}^{(t+1)}\right)^{z_{i,k}} \prod_{k=1}^{K}\prod_{j=1}^{K}\left(\tilde{a}^{(t+1)}_{j,k}\right)^{z_{i-1,j}z_{i,k}} \beta^{(t+1)}(\boldsymbol{z}_i) \\
\xi^{(t+1)}_{i,j,k} &= \sum_{\boldsymbol{z}_{i-1}} \sum_{\boldsymbol{z}_i} q^{(t+1)}(\boldsymbol{z}_{i-1}, \boldsymbol{z}_{i}) z_{i-1,j} z_{i,k}
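In vectorized form, the :math:`\alpha`-:math:`\beta` recursions and the resulting :math:`\gamma` and :math:`\xi` can be sketched with per-step normalization (a standard scaled forward-backward pass; the function and variable names are illustrative, and indices are 0-based where the text is 1-based):

```python
import numpy as np

def forward_backward(rho, pi_t, a_t):
    # rho: (n, K) emission terms rho_{i,k}; pi_t: (K,) values pi~_k;
    # a_t: (K, K) values a~_{j,k}.  Returns gamma (n, K) and xi (n-1, K, K).
    n, K = rho.shape
    alpha = np.empty((n, K)); beta = np.empty((n, K)); c = np.empty(n)
    alpha[0] = rho[0] * pi_t                      # alpha(z_1)
    c[0] = alpha[0].sum(); alpha[0] /= c[0]       # normalize each step
    for i in range(1, n):
        alpha[i] = rho[i] * (alpha[i - 1] @ a_t)  # forward recursion
        c[i] = alpha[i].sum(); alpha[i] /= c[i]
    beta[n - 1] = 1.0                             # beta(z_n) = 1
    for i in range(n - 2, -1, -1):
        beta[i] = a_t @ (rho[i + 1] * beta[i + 1]) / c[i + 1]  # backward
    gamma = alpha * beta                          # gamma_{i,k}
    xi = (alpha[:-1, :, None] * a_t[None]         # xi_{i,j,k}
          * (rho[1:] * beta[1:])[:, None, :] / c[1:, None, None])
    return gamma, xi
```

The per-step constants :math:`c_i` keep the recursions numerically stable and make the proportionalities above exact, so each row of `gamma` and each slice of `xi` sums to one even when :math:`\tilde{\pi}` and :math:`\tilde{a}` are unnormalized.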

The approximate predictive distribution is as follows:

* :math:`\boldsymbol{x}_{n+1} \in \mathbb{R}^D`: a new data point
* :math:`(a_{\mathrm{p},j,k})_{1\leq j,k\leq K} \in [0, 1]^{K\times K}`: the parameters of the predictive transition probability of latent classes, (:math:`\sum_{k=1}^K a_{\mathrm{p},j,k}=1`)
* :math:`\boldsymbol{\mu}_{\mathrm{p},k} \in \mathbb{R}^D`: the parameter of the predictive distribution
* :math:`\boldsymbol{\Lambda}_{\mathrm{p},k} \in \mathbb{R}^{D \times D}`: the parameter of the predictive distribution (a positive definite matrix)
* :math:`\nu_{\mathrm{p},k} \in \mathbb{R}_{>0}`: the parameter of the predictive distribution

.. math::
&p(\boldsymbol{x}_{n+1}|\boldsymbol{x}^n) \\
&\approx \sum_{k=1}^K \left( \sum_{j=1}^K \gamma_{n,j}^{(t)} a_{\mathrm{p},j,k} \right) \mathrm{St}(\boldsymbol{x}_{n+1}|\boldsymbol{\mu}_{\mathrm{p},k},\boldsymbol{\Lambda}_{\mathrm{p},k}, \nu_{\mathrm{p},k}) \\
&= \sum_{k=1}^K \left( \sum_{j=1}^K \gamma_{n,j}^{(t)} a_{\mathrm{p},j,k} \right)\Biggl[ \frac{\Gamma (\nu_{\mathrm{p},k} / 2 + D / 2)}{\Gamma (\nu_{\mathrm{p},k} / 2)} \frac{|\boldsymbol{\Lambda}_{\mathrm{p},k}|^{1/2}}{(\nu_{\mathrm{p},k} \pi)^{D/2}} \nonumber \\
&\qquad \qquad \qquad \qquad \qquad \times \left( 1 + \frac{1}{\nu_{\mathrm{p},k}} (\boldsymbol{x}_{n+1} - \boldsymbol{\mu}_{\mathrm{p},k})^\top \boldsymbol{\Lambda}_{\mathrm{p},k} (\boldsymbol{x}_{n+1} - \boldsymbol{\mu}_{\mathrm{p},k}) \right)^{-\nu_{\mathrm{p},k}/2 - D/2} \Biggr],

where the parameters are obtained from the hyperparameters of the predictive distribution as follows:

.. math::
a_{\mathrm{p},j,k} &= \frac{\zeta_{n,j,k}^{(t)}}{\sum_{k'=1}^K \zeta_{n,j,k'}^{(t)}}, \\
\boldsymbol{\mu}_{\mathrm{p},k} &= \boldsymbol{m}_{n,k}^{(t)}, \\
\boldsymbol{\Lambda}_{\mathrm{p},k} &= \frac{\kappa_{n,k}^{(t)} (\nu_{n,k}^{(t)} - D + 1)}{\kappa_{n,k}^{(t)} + 1} \boldsymbol{W}_{n,k}^{(t)}, \\
\nu_{\mathrm{p},k} &= \nu_{n,k}^{(t)} - D + 1.
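These mappings, together with the multivariate Student's t density written out in the expanded predictive formula, can be sketched as follows (the helper names are hypothetical; `lgamma` comes from the standard library):

```python
import numpy as np
from math import lgamma

def predictive_params(m_n, kappa_n, nu_n, w_n, D):
    # Map the posterior hyperparameters of class k to the predictive
    # Student's t parameters, following the formulas above.
    mu_p = m_n
    nu_p = nu_n - D + 1
    lambda_p = kappa_n * nu_p / (kappa_n + 1) * w_n
    return mu_p, lambda_p, nu_p

def multivariate_t_pdf(x, mu, lam, nu):
    # St(x | mu, lam, nu) with precision-like matrix lam, as in the
    # expanded predictive density above.
    D = x.shape[0]
    quad = (x - mu) @ lam @ (x - mu)
    log_pdf = (lgamma((nu + D) / 2) - lgamma(nu / 2)
               + 0.5 * np.linalg.slogdet(lam)[1]
               - 0.5 * D * np.log(nu * np.pi)
               - (nu + D) / 2 * np.log1p(quad / nu))
    return np.exp(log_pdf)
```

For large :math:`\nu_{\mathrm{p},k}` the Student's t component approaches a Gaussian with precision :math:`\boldsymbol{\Lambda}_{\mathrm{p},k}`, which is a quick sanity check on the implementation.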
"""
from ._hiddenmarkovnormal import GenModel
from ._hiddenmarkovnormal import LearnModel

__all__ = ["GenModel","LearnModel"]