minor changes to getting started and delving in
dustinvtran committed Sep 24, 2016
1 parent 5229547
commit a99b4c6
Showing 11 changed files with 81 additions and 47 deletions.
7 changes: 7 additions & 0 deletions docs/source/edward.inferences.inference.rst
@@ -0,0 +1,7 @@
+edward.inferences.inference module
+==================================
+
+.. automodule:: edward.inferences.inference
+    :members:
+    :undoc-members:
+    :show-inheritance:
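
Note: these generated stub pages depend on Sphinx's autodoc extension being
enabled. A minimal sketch of the conf.py setting they assume (the repository's
actual conf.py is not part of this commit):

    # docs/source/conf.py (sketch; actual file not shown in this commit)
    extensions = ['sphinx.ext.autodoc']  # enables the automodule directives above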
7 changes: 7 additions & 0 deletions docs/source/edward.inferences.klpq.rst
@@ -0,0 +1,7 @@
+edward.inferences.klpq module
+=============================
+
+.. automodule:: edward.inferences.klpq
+    :members:
+    :undoc-members:
+    :show-inheritance:
7 changes: 7 additions & 0 deletions docs/source/edward.inferences.klqp.rst
@@ -0,0 +1,7 @@
+edward.inferences.klqp module
+=============================
+
+.. automodule:: edward.inferences.klqp
+    :members:
+    :undoc-members:
+    :show-inheritance:
7 changes: 7 additions & 0 deletions docs/source/edward.inferences.map.rst
@@ -0,0 +1,7 @@
+edward.inferences.map module
+============================
+
+.. automodule:: edward.inferences.map
+    :members:
+    :undoc-members:
+    :show-inheritance:
7 changes: 7 additions & 0 deletions docs/source/edward.inferences.monte_carlo.rst
@@ -0,0 +1,7 @@
+edward.inferences.monte_carlo module
+====================================
+
+.. automodule:: edward.inferences.monte_carlo
+    :members:
+    :undoc-members:
+    :show-inheritance:
17 changes: 15 additions & 2 deletions docs/source/edward.inferences.rst
@@ -1,7 +1,20 @@
-edward.inferences module
-========================
+edward.inferences package
+=========================
 
 .. automodule:: edward.inferences
     :members:
     :undoc-members:
     :show-inheritance:
+
+Submodules
+----------
+
+.. toctree::
+
+    edward.inferences.inference
+    edward.inferences.klpq
+    edward.inferences.klqp
+    edward.inferences.map
+    edward.inferences.monte_carlo
+    edward.inferences.variational_inference
+
7 changes: 7 additions & 0 deletions docs/source/edward.inferences.variational_inference.rst
@@ -0,0 +1,7 @@
+edward.inferences.variational_inference module
+==============================================
+
+.. automodule:: edward.inferences.variational_inference
+    :members:
+    :undoc-members:
+    :show-inheritance:
2 changes: 1 addition & 1 deletion docs/source/edward.rst
@@ -11,6 +11,7 @@ Subpackages
 
 .. toctree::
 
+   edward.inferences
    edward.models
    edward.stats
 
@@ -20,7 +21,6 @@ Submodules
 .. toctree::
 
    edward.criticisms
-   edward.inferences
    edward.util
    edward.version
 
12 changes: 4 additions & 8 deletions docs/tex/delving-in.tex
@@ -78,7 +78,6 @@ \subsubsection{Inference}\label{inference}
 describes our best guess of the hidden structure, and its variance
 describes the uncertainty around our best guess.
 
-%The posterior distribution is typically intractable.
 Edward has many inference algorithms and makes it easy
 to develop new ones.
 
@@ -89,13 +88,10 @@ \subsubsection{Inference}\label{inference}
 Edward focuses on variational inference. It views posterior inference
 as positing a model of the latent variables $q(z \;;\; \lambda)$ and optimizing it to
 approximate the posterior $p(z \mid x)$.
-%which seeks to match the variational model to the
-%posterior of the probability model.
 
-
-Variational models use the same language of random variables to infer
-latent variables of a probability model.
-For example, to specify a variational model (e.g.~for a Gaussian mixture)
+Variational models are defined using the same language of random
+variables. For example, to specify a variational model (e.g.~for a
+Gaussian mixture)
 \begin{align*}
 q(z \;;\; \lambda)
 &=
@@ -135,7 +131,7 @@ \subsubsection{Inference}\label{inference}
 run the inference.
 \begin{lstlisting}[language=Python]
 data = {x: x_data}
-inference = ed.KLpq({pi: qpi, mu: qmu, sigma: qsigma}, data, model)
+inference = ed.KLpq({pi: qpi, mu: qmu, sigma: qsigma}, data)
 inference.run(n_iter=500, n_minibatch=5)
 \end{lstlisting}
 This runs the \texttt{KLpq} minimization algorithm for \texttt{500} iterations,
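
The snippet above assumes variational factors qpi, qmu, and qsigma defined
earlier in the tutorial. A minimal sketch of what such factors could look like
(hypothetical shapes; it assumes Dirichlet, Normal, and InverseGamma random
variables in edward.models with the alpha/beta/mu/sigma parameterizations used
in this era of Edward):

    import tensorflow as tf
    from edward.models import Dirichlet, InverseGamma, Normal

    K, D = 2, 2  # hypothetical: number of mixture components, data dimension

    # Mixing proportions; softplus keeps the concentration positive.
    qpi = Dirichlet(alpha=tf.nn.softplus(tf.Variable(tf.random_normal([K]))))
    # Component means; a fully factorized normal approximation.
    qmu = Normal(mu=tf.Variable(tf.random_normal([K * D])),
                 sigma=tf.nn.softplus(tf.Variable(tf.random_normal([K * D]))))
    # Component variances; both parameters constrained positive via softplus.
    qsigma = InverseGamma(alpha=tf.nn.softplus(tf.Variable(tf.random_normal([K * D]))),
                          beta=tf.nn.softplus(tf.Variable(tf.random_normal([K * D]))))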
25 changes: 8 additions & 17 deletions docs/tex/getting-started.tex
@@ -38,40 +38,32 @@ \subsubsection{Your first Edward program}
 y_train = np.cos(x_train) + norm.rvs(0, 0.1, size=50)
 \end{lstlisting}
 \includegraphics[width=700px]{images/getting-started-fig0.png}
-Next, define a Bayesian neural network with one hidden layer.
+Next, define a two-layer Bayesian neural network. Here, we
+define the neural network manually with \texttt{tanh} nonlinearities.
 \begin{lstlisting}[language=Python]
 import tensorflow as tf
 from edward.models import Normal
 
 W_0 = Normal(mu=tf.zeros([1, 2]), sigma=tf.ones([1, 2]))
-W_1 = Normal(mu=tf.zeros([2, 2]), sigma=tf.ones([2, 2]))
-W_2 = Normal(mu=tf.zeros([2, 1]), sigma=tf.ones([2, 1]))
+W_1 = Normal(mu=tf.zeros([2, 1]), sigma=tf.ones([2, 1]))
 b_0 = Normal(mu=tf.zeros(2), sigma=tf.ones(2))
-b_1 = Normal(mu=tf.zeros(2), sigma=tf.ones(2))
-b_2 = Normal(mu=tf.zeros(1), sigma=tf.ones(1))
+b_1 = Normal(mu=tf.zeros(1), sigma=tf.ones(1))
 
 x = tf.convert_to_tensor(x_train, dtype=tf.float32)
-y = Normal(mu=neural_network(x, W_0, W_1, W_2, b_0, b_1, b_2),
+y = Normal(mu=tf.matmul(tf.nn.tanh(tf.matmul(x, W_0) + b_0), W_1) + b_1,
            sigma=0.1)
 \end{lstlisting}
 
 Next, make inferences about the model from data.
 Edward focuses on variational inference. Specify a normal
 approximation over the weights and biases.
 \begin{lstlisting}[language=Python]
 from edward.models import Normal
 
 qW_0 = Normal(mu=tf.Variable(tf.random_normal([1, 2])),
              sigma=tf.nn.softplus(tf.Variable(tf.random_normal([1, 2]))))
-qW_1 = Normal(mu=tf.Variable(tf.random_normal([2, 2])),
-             sigma=tf.nn.softplus(tf.Variable(tf.random_normal([2, 2]))))
-qW_2 = Normal(mu=tf.Variable(tf.random_normal([2, 1])),
+qW_1 = Normal(mu=tf.Variable(tf.random_normal([2, 1])),
              sigma=tf.nn.softplus(tf.Variable(tf.random_normal([2, 1]))))
 qb_0 = Normal(mu=tf.Variable(tf.random_normal([2])),
              sigma=tf.nn.softplus(tf.Variable(tf.random_normal([2]))))
-qb_1 = Normal(mu=tf.Variable(tf.random_normal([2])),
-             sigma=tf.nn.softplus(tf.Variable(tf.random_normal([2]))))
-qb_2 = Normal(mu=tf.Variable(tf.random_normal([1])),
+qb_1 = Normal(mu=tf.Variable(tf.random_normal([1])),
              sigma=tf.nn.softplus(tf.Variable(tf.random_normal([1]))))
 \end{lstlisting}
 Defining \texttt{tf.Variable} allows the variational factors'
@@ -89,8 +81,7 @@ \subsubsection{Your first Edward program}
 
 data = {y: y_train}
 inference = ed.MFVI({W_0: qW_0, b_0: qb_0,
-                     W_1: qW_1, b_1: qb_1,
-                     W_2: qW_2, b_2: qb_2}, data)
+                     W_1: qW_1, b_1: qb_1}, data)
 inference.run(n_iter=1000)
 \end{lstlisting}
 
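The truncated sentence above ("Defining \texttt{tf.Variable} allows the
variational factors' ...") refers to learning the variational parameters:
wrapping them in tf.Variable makes them free parameters that inference
optimizes, and tf.nn.softplus, softplus(x) = log(1 + exp(x)) > 0, keeps each
standard deviation valid. The pattern in isolation (a sketch with hypothetical
names):

    import tensorflow as tf

    raw_scale = tf.Variable(tf.random_normal([2]))  # unconstrained, learned
    scale = tf.nn.softplus(raw_scale)               # strictly positive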
30 changes: 11 additions & 19 deletions examples/getting_started_example.py
@@ -33,10 +33,9 @@ def build_toy_dataset(N=50, noise_std=0.1):
   return x, y
 
 
-def neural_network(x, W_0, W_1, W_2, b_0, b_1, b_2):
+def neural_network(x, W_0, W_1, b_0, b_1):
   h = tf.nn.tanh(tf.matmul(x, W_0) + b_0)
-  h = tf.nn.tanh(tf.matmul(h, W_1) + b_1)
-  h = tf.matmul(h, W_2) + b_2
+  h = tf.matmul(h, W_1) + b_1
   return tf.reshape(h, [-1])
 
 
@@ -50,34 +49,27 @@ def neural_network(x, W_0, W_1, W_2, b_0, b_1, b_2):
 
 # MODEL
 W_0 = Normal(mu=tf.zeros([D, 2]), sigma=tf.ones([D, 2]))
-W_1 = Normal(mu=tf.zeros([2, 2]), sigma=tf.ones([2, 2]))
-W_2 = Normal(mu=tf.zeros([2, 1]), sigma=tf.ones([2, 1]))
+W_1 = Normal(mu=tf.zeros([2, 1]), sigma=tf.ones([2, 1]))
 b_0 = Normal(mu=tf.zeros(2), sigma=tf.ones(2))
-b_1 = Normal(mu=tf.zeros(2), sigma=tf.ones(2))
-b_2 = Normal(mu=tf.zeros(1), sigma=tf.ones(1))
+b_1 = Normal(mu=tf.zeros(1), sigma=tf.ones(1))
 
 x = tf.convert_to_tensor(x_train, dtype=tf.float32)
-y = Normal(mu=neural_network(x, W_0, W_1, W_2, b_0, b_1, b_2),
+y = Normal(mu=neural_network(x, W_0, W_1, b_0, b_1),
            sigma=0.1 * tf.ones(N))
 
 # INFERENCE
 qW_0 = Normal(mu=tf.Variable(tf.random_normal([D, 2])),
              sigma=tf.nn.softplus(tf.Variable(tf.random_normal([D, 2]))))
-qW_1 = Normal(mu=tf.Variable(tf.random_normal([2, 2])),
-             sigma=tf.nn.softplus(tf.Variable(tf.random_normal([2, 2]))))
-qW_2 = Normal(mu=tf.Variable(tf.random_normal([2, 1])),
+qW_1 = Normal(mu=tf.Variable(tf.random_normal([2, 1])),
              sigma=tf.nn.softplus(tf.Variable(tf.random_normal([2, 1]))))
 qb_0 = Normal(mu=tf.Variable(tf.random_normal([2])),
              sigma=tf.nn.softplus(tf.Variable(tf.random_normal([2]))))
-qb_1 = Normal(mu=tf.Variable(tf.random_normal([2])),
-             sigma=tf.nn.softplus(tf.Variable(tf.random_normal([2]))))
-qb_2 = Normal(mu=tf.Variable(tf.random_normal([1])),
+qb_1 = Normal(mu=tf.Variable(tf.random_normal([1])),
              sigma=tf.nn.softplus(tf.Variable(tf.random_normal([1]))))
 
 data = {y: y_train}
 inference = ed.MFVI({W_0: qW_0, b_0: qb_0,
-                     W_1: qW_1, b_1: qb_1,
-                     W_2: qW_2, b_2: qb_2}, data)
+                     W_1: qW_1, b_1: qb_1}, data)
 inference.initialize()
 
 # Sample functions from variational model to visualize fits.
@@ -86,8 +78,8 @@ def neural_network(x, W_0, W_1, W_2, b_0, b_1, b_2):
 x = tf.expand_dims(tf.constant(inputs), 1)
 mus = []
 for s in range(10):
-  mus += [neural_network(x, qW_0.sample(), qW_1.sample(), qW_2.sample(),
-                         qb_0.sample(), qb_1.sample(), qb_2.sample())]
+  mus += [neural_network(x, qW_0.sample(), qW_1.sample(),
+                         qb_0.sample(), qb_1.sample())]
 
 mus = tf.pack(mus)
 
@@ -109,7 +101,7 @@ def neural_network(x, W_0, W_1, W_2, b_0, b_1, b_2):
 
 
 # RUN MEAN-FIELD VARIATIONAL INFERENCE
-inference.run(n_iter=1000, n_samples=5, n_print=100)
+inference.run(n_iter=500, n_samples=5, n_print=100)
 
 
 # SECOND VISUALIZATION (posterior)
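The visualization blocks of this script are collapsed in the view above. A
minimal sketch of plotting the sampled network functions (hypothetical; it
assumes the names x_train, y_train, inputs, and mus from the visible lines,
and that the TensorFlow variables have already been initialized):

    import matplotlib.pyplot as plt

    sess = ed.get_session()
    outputs = sess.run(mus)  # 10 draws of the network evaluated on `inputs`

    plt.scatter(x_train, y_train, s=20, color='black')  # observed data
    for s in range(10):
      plt.plot(inputs, outputs[s], alpha=0.5)  # one curve per posterior draw
    plt.show()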
