
Develop metatree resume #14

Merged · 7 commits · Jul 17, 2022
4 changes: 2 additions & 2 deletions bayesml/autoregressive/__init__.py
@@ -85,7 +85,7 @@
.. math::
\mathrm{St}(x_{n+1}|m_\mathrm{p}, \lambda_\mathrm{p}, \nu_\mathrm{p})
= \frac{\Gamma (\nu_\mathrm{p}/2 + 1/2)}{\Gamma (\nu_\mathrm{p}/2)}
- \left( \frac{m_\mathrm{p}}{\pi \nu_\mathrm{p}} \right)^{1/2}
+ \left( \frac{\lambda_\mathrm{p}}{\pi \nu_\mathrm{p}} \right)^{1/2}
\left[ 1 + \frac{\lambda_\mathrm{p}(x_{n+1}-m_\mathrm{p})^2}{\nu_\mathrm{p}} \right]^{-\nu_\mathrm{p}/2 - 1/2}.

.. math::
@@ -95,7 +95,7 @@
where the parameters are obtained from the hyperparameters of the posterior distribution as follows.

.. math::
- m_\mathrm{p} &= \mu_n^\top \boldsymbol{x}'_n,\\
+ m_\mathrm{p} &= \boldsymbol{\mu}_n^\top \boldsymbol{x}'_n,\\
\lambda_\mathrm{p} &= \frac{\alpha_n}{\beta_n} (1 + (\boldsymbol{x}'_n)^\top \boldsymbol{\Lambda}_n^{-1} \boldsymbol{x}'_n)^{-1},\\
\nu_\mathrm{p} &= 2 \alpha_n.
"""
12 changes: 6 additions & 6 deletions bayesml/bernoulli/__init__.py
@@ -23,7 +23,7 @@
* :math:`B(\cdot,\cdot): \mathbb{R}_{>0} \times \mathbb{R}_{>0} \to \mathbb{R}_{>0}`: the Beta function

.. math::
- p(\theta) = \mathrm{Beta}(\theta|\alpha_0,\beta_0) = \frac{1}{B(\alpha_0, \beta_0)} \theta^{\alpha_0} (1-\theta)^{\beta_0}.
+ p(\theta) = \mathrm{Beta}(\theta|\alpha_0,\beta_0) = \frac{1}{B(\alpha_0, \beta_0)} \theta^{\alpha_0 - 1} (1-\theta)^{\beta_0 - 1}.

.. math::
\mathbb{E}[\theta] &= \frac{\alpha_0}{\alpha_0 + \beta_0}, \\
@@ -36,11 +36,11 @@
* :math:`\beta_n \in \mathbb{R}_{>0}`: a hyperparameter

.. math::
- p(\theta | x^n) = \mathrm{Beta}(\theta|\alpha_n,\beta_n) = \frac{1}{B(\alpha_n, \beta_n)} \theta^{\alpha_n} (1-\theta)^{\beta_n},
+ p(\theta | x^n) = \mathrm{Beta}(\theta|\alpha_n,\beta_n) = \frac{1}{B(\alpha_n, \beta_n)} \theta^{\alpha_n - 1} (1-\theta)^{\beta_n - 1},

.. math::
\mathbb{E}[\theta | x^n] &= \frac{\alpha_n}{\alpha_n + \beta_n}, \\
- \mathbb{V}[\theta | x^n] &= \frac{\alpha_n \beta_n}{(\alpha_n + \beta_n)^2 (\alpha_n + \beta_n + 1)}.
+ \mathbb{V}[\theta | x^n] &= \frac{\alpha_n \beta_n}{(\alpha_n + \beta_n)^2 (\alpha_n + \beta_n + 1)},

where the updating rule of the hyperparameters is

@@ -56,16 +56,16 @@
* :math:`\theta_\mathrm{p} \in [0,1]`: a parameter

.. math::
- p(x_{n+1} | x^n) = \mathrm{Bern}(x_{n+1}|\theta_\mathrm{p}) =\theta_\mathrm{p}^{x_{n+1}}(1-\theta_\mathrm{p})^{1-x_{n+1}}
+ p(x_{n+1} | x^n) = \mathrm{Bern}(x_{n+1}|\theta_\mathrm{p}) =\theta_\mathrm{p}^{x_{n+1}}(1-\theta_\mathrm{p})^{1-x_{n+1}},

.. math::
\mathbb{E}[x_{n+1} | x^n] &= \theta_\mathrm{p}, \\
- \mathbb{V}[x_{n+1} | x^n] &= \theta_\mathrm{p} (1 - \theta_\mathrm{p}).
+ \mathbb{V}[x_{n+1} | x^n] &= \theta_\mathrm{p} (1 - \theta_\mathrm{p}),

where the parameters are obtained from the hyperparameters of the posterior distribution as follows.

.. math::
- \theta_\mathrm{p} = \frac{\alpha_n}{\alpha_n + \beta_n}
+ \theta_\mathrm{p} = \frac{\alpha_n}{\alpha_n + \beta_n}.
"""

from ._bernoulli import GenModel
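
A quick way to sanity-check the corrected Beta formulas is the standard conjugate update below; the exact updating rule is outside this excerpt, so treat it as an assumption, and the helper name is illustrative rather than part of GenModel/LearnModel:

```python
import numpy as np

def beta_bernoulli_update(alpha_0, beta_0, x):
    """Posterior hyperparameters and predictive parameter (illustrative sketch)."""
    x = np.asarray(x)
    alpha_n = alpha_0 + x.sum()              # add the number of successes
    beta_n = beta_0 + x.size - x.sum()       # add the number of failures
    theta_p = alpha_n / (alpha_n + beta_n)   # predictive Bernoulli parameter
    return alpha_n, beta_n, theta_p
```
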
6 changes: 3 additions & 3 deletions bayesml/categorical/__init__.py
@@ -23,7 +23,7 @@

The prior distribution is as follows:

- * :math:`\boldsymbol{\alpha}_0 \in \mathbb{R}_{>0}`: a hyperparameter
+ * :math:`\boldsymbol{\alpha}_0 \in \mathbb{R}_{>0}^d`: a hyperparameter
* :math:`\Gamma (\cdot)`: the gamma function
* :math:`\tilde{\alpha}_0 = \sum_{k=1}^d \alpha_{0,k}`
* :math:`C(\boldsymbol{\alpha}_0)=\frac{\Gamma(\tilde{\alpha}_0)}{\Gamma(\alpha_{0,1})\cdots\Gamma(\alpha_{0,d})}`
@@ -58,7 +58,7 @@

The predictive distribution is as follows:

- * :math:`x_{n+1} \in \{ 0, 1\}^d`: a new data point
+ * :math:`\boldsymbol{x}_{n+1} \in \{ 0, 1\}^d`: a new data point
* :math:`\boldsymbol{\theta}_\mathrm{p} \in [0, 1]^d`: the hyperparameter of the posterior (:math:`\sum_{k=1}^d \theta_{\mathrm{p},k} = 1`)

.. math::
@@ -72,7 +72,7 @@
where the parameters are obtained from the hyperparameters of the posterior distribution as follows:

.. math::
- \boldsymbol{\theta}_{\mathrm{p},k} = \frac{\alpha_{n,k}}{\sum_{k=1}^d \alpha_{n,k}}, \quad (k \in \{ 1, 2, \dots , d \}).
+ \theta_{\mathrm{p},k} = \frac{\alpha_{n,k}}{\sum_{k=1}^d \alpha_{n,k}}, \quad (k \in \{ 1, 2, \dots , d \}).
"""

from ._categorical import GenModel
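
The Dirichlet–categorical case works the same way; a small sketch assuming the usual count-based update for alpha_n (the rule itself is not shown in this excerpt), with illustrative names:

```python
import numpy as np

def dirichlet_categorical_update(alpha_0, x):
    """alpha_0: (d,) Dirichlet hyperparameters; x: (n, d) one-hot data (illustrative sketch)."""
    alpha_n = np.asarray(alpha_0) + np.asarray(x).sum(axis=0)  # add per-class counts (assumed rule)
    theta_p = alpha_n / alpha_n.sum()    # theta_{p,k} = alpha_{n,k} / sum_k alpha_{n,k}
    return alpha_n, theta_p
```
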
8 changes: 4 additions & 4 deletions bayesml/exponential/__init__.py
@@ -42,13 +42,13 @@

.. math::
\mathbb{E}[\lambda | x^n] &= \frac{\alpha_n}{\beta_n}, \\
- \mathbb{V}[\lambda | x^n] &= \frac{\alpha_n}{\beta_n^2}.
+ \mathbb{V}[\lambda | x^n] &= \frac{\alpha_n}{\beta_n^2},

where the updating rule of the hyperparameters is

.. math::
- \alpha_n &= \alpha_0 + n\\
- \beta_n &= \beta_0 + \sum_{i=1}^n x_i
+ \alpha_n &= \alpha_0 + n,\\
+ \beta_n &= \beta_0 + \sum_{i=1}^n x_i.


The predictive distribution is as follows:
@@ -58,7 +58,7 @@
* :math:`\eta_\mathrm{p} \in \mathbb{R}_{>0}`: the hyperparameter of the posterior

.. math::
- p(x_{n+1}|x^n)=\mathrm{Lomax}(x_{n+1}|\alpha_\mathrm{p},\eta_\mathrm{p}) = \frac{\alpha_\mathrm{p}}{\eta_\mathrm{p}}\left(1+\frac{x}{\eta_\mathrm{p}}\right)^{-(\alpha_\mathrm{p}+1)},
+ p(x_{n+1}|x^n)=\mathrm{Lomax}(x_{n+1}|\alpha_\mathrm{p},\eta_\mathrm{p}) = \frac{\alpha_\mathrm{p}}{\eta_\mathrm{p}}\left(1+\frac{x_{n+1}}{\eta_\mathrm{p}}\right)^{-(\alpha_\mathrm{p}+1)},

.. math::
\mathbb{E}[x_{n+1} | x^n] &=
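
The gamma updating rule shown above translates directly to code; in this sketch the identification of the Lomax predictive parameters with (alpha_n, beta_n) is an assumption, since it lies outside the excerpt:

```python
import numpy as np

def gamma_exponential_update(alpha_0, beta_0, x):
    """Gamma posterior hyperparameters for the exponential model (illustrative sketch)."""
    x = np.asarray(x)
    alpha_n = alpha_0 + x.size   # alpha_n = alpha_0 + n
    beta_n = beta_0 + x.sum()    # beta_n = beta_0 + sum_i x_i
    # Assumed identification of the Lomax predictive parameters (not shown in the excerpt):
    alpha_p, eta_p = alpha_n, beta_n
    return alpha_n, beta_n, alpha_p, eta_p
```
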
6 changes: 3 additions & 3 deletions bayesml/linearregression/__init__.py
@@ -10,7 +10,7 @@
* :math:`d \in \mathbb N`: a dimension
* :math:`\boldsymbol{x} = [x_1, x_2, \dots , x_d] \in \mathbb{R}^d`: an explanatory variable. If you consider an intercept term, it should be included as one of the elements of :math:`\boldsymbol{x}`.
* :math:`y\in\mathbb{R}`: an objective variable
- * :math:`\tau \in\mathbb{R}`: a parameter
+ * :math:`\tau \in\mathbb{R}_{>0}`: a parameter
* :math:`\boldsymbol{\theta}\in\mathbb{R}^{d}`: a parameter

.. math::
@@ -25,7 +25,7 @@
The prior distribution is as follows:

* :math:`\boldsymbol{\mu_0} \in \mathbb{R}^d`: a hyperparameter
- * :math:`\boldsymbol{\Lambda_0} \in \mathbb{R}^{d\times d}`: a hyperparameter
+ * :math:`\boldsymbol{\Lambda_0} \in \mathbb{R}^{d\times d}`: a hyperparameter (a positive definite matrix)
* :math:`\alpha_0\in \mathbb{R}_{>0}`: a hyperparameter
* :math:`\beta_0\in \mathbb{R}_{>0}`: a hyperparameter

@@ -78,7 +78,7 @@

.. math::
p(y_{n+1} | \boldsymbol{X}, \boldsymbol{y}, \boldsymbol{x}_{n+1} ) &= \mathrm{St}\left(y_{n+1} \mid m_\mathrm{p}, \lambda_\mathrm{p}, \nu_\mathrm{p}\right) \\
- &= \frac{\Gamma (\nu_\mathrm{p} / 2) + 1/2}{\Gamma (\nu_\mathrm{p} / 2)} \left( \frac{\lambda_\mathrm{p}}{\pi \nu_\mathrm{p}} \right)^{1/2} \left( 1 + \frac{\lambda_\mathrm{p} (y_{n+1} - m_\mathrm{p})^2}{\nu_\mathrm{p}} \right)^{-\nu_\mathrm{p}/2 - 1/2},
+ &= \frac{\Gamma (\nu_\mathrm{p} / 2 + 1/2 )}{\Gamma (\nu_\mathrm{p} / 2)} \left( \frac{\lambda_\mathrm{p}}{\pi \nu_\mathrm{p}} \right)^{1/2} \left( 1 + \frac{\lambda_\mathrm{p} (y_{n+1} - m_\mathrm{p})^2}{\nu_\mathrm{p}} \right)^{-\nu_\mathrm{p}/2 - 1/2},

.. math::
\mathbb{E}[y_{n+1} | \boldsymbol{X}, \boldsymbol{y}, \boldsymbol{x}_{n+1}] &= m_\mathrm{p} & (\nu_\mathrm{p} > 1), \\
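
The corrected Student's t density can be evaluated numerically as a check; a short NumPy/SciPy sketch that mirrors the fixed formula (the function name is illustrative):

```python
import numpy as np
from scipy.special import gammaln

def student_t_logpdf(y, m_p, lambda_p, nu_p):
    """Log density of St(y | m_p, lambda_p, nu_p) as written in the corrected docstring."""
    return (gammaln(nu_p / 2 + 0.5) - gammaln(nu_p / 2)
            + 0.5 * np.log(lambda_p / (np.pi * nu_p))
            - (nu_p / 2 + 0.5) * np.log1p(lambda_p * (y - m_p) ** 2 / nu_p))
```
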