Commit a718727: allow log hyperparameters
Parent: fadf17b

8 files changed, +72 -93 lines

docs/index.rst (+1)

@@ -22,6 +22,7 @@ User Guide
    user/hyper
    user/gp
    user/kernels
+   user/modeling
    user/solvers

docs/user/kernels.rst (+29 -18)

@@ -19,33 +19,50 @@ The standard kernels fall into the following categories:
 sophisticated models and :ref:`new-kernels` explains how you would go about
 incorporating a custom kernel.
 
-**Note:** every kernel takes an optional ``ndim`` keyword that must be set to
-the number of input dimensions for your problem.
+Common parameters
+-----------------
+
+Every kernel accepts the two keyword arguments ``ndim`` and ``axes``. By
+default, kernels are only one dimensional so you must specify the ``ndim``
+argument if you want the kernel to work with higher dimensional inputs.
+By default, higher dimensional kernels are applied to every dimension but you
+can restrict the evaluation to a subspace using the ``axes`` argument.
+For example, if you have a 3 dimensional input space but you want one of the
+kernels to only act in the first dimension, you would do the following:
+
+.. code-block:: python
+
+    from george import kernels
+    kernel = 10.0 * kernels.Matern32Kernel(1.0, ndim=3, axes=0)
+
+Similarly, if you wanted the kernel to act on only the second and third
+dimensions, you could do something like:
+
+.. code-block:: python
+
+    kernel = 10.0 * kernels.ExpSquaredKernel([1.0, 0.5], ndim=3, axes=[1, 2])
 
 .. _implementation:
 
-Implementation Details
+Implementation details
 ----------------------
 
 It's worth understanding how these kernels are implemented.
 Most of the hard work is done at a low level (in C++) and the Python is only a
 thin wrapper to this functionality.
 This makes the code fast and consistent across interfaces but it also means
-that it isn't currently possible to implement new kernel functions efficiently
-without recompiling the code.
+that it isn't currently possible to implement new kernel functions without
+recompiling the code.
 Almost every kernel has hyperparameters that you can set to control its
-behavior and these can be accessed via the ``pars`` property.
-The values in this array are in the same order as you specified them when
-initializing the kernel and, in the case of composite kernels (see
-:ref:`combining-kernels`) the order goes from left to right.
-For example,
+behavior and these are controlled using the :ref:`modeling`.
 
 .. code-block:: python
 
     from george import kernels
 
     k = 2.0 * kernels.Matern32Kernel(5.0)
-    print(k.pars)
+    print(k.get_vector())
     # array([ 2., 5.])
 
@@ -156,10 +173,4 @@ addition:
 Implementing New Kernels
 ------------------------
 
-Implementing custom kernels in George is a bit of a pain in the current
-version. For now, the only way to do it is with the :class:`PythonKernel`
-where you provide a Python function that computes the value of the kernel
-function at *a single pair of training points*.
-
-.. autoclass:: george.kernels.PythonKernel
-    :members:
+TO DO.
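The ``axes`` behavior described in the new docs can be sketched without george itself. The function below is a hypothetical stand-in for an exponential-squared kernel (not george's implementation); it only illustrates that a kernel restricted to ``axes`` ignores every other input dimension:

```python
import numpy as np

def toy_exp_squared(x1, x2, scale, axes):
    # Hypothetical sketch: the kernel only ever sees the input
    # dimensions listed in ``axes``.
    axes = np.atleast_1d(axes)
    d = x1[axes] - x2[axes]
    return float(np.exp(-0.5 * np.sum(d ** 2) / scale))

# Two 3-D points that agree only in the first dimension:
x1 = np.array([1.0, 5.0, -2.0])
x2 = np.array([1.0, 0.0, 9.0])
print(toy_exp_squared(x1, x2, 1.0, axes=0))  # 1.0: dims 1 and 2 are ignored
```

With ``axes=[1, 2]`` the same pair of points gives a value well below 1, since the kernel then sees the dimensions where they differ.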

docs/user/modeling.rst (+3 -5)

@@ -1,6 +1,6 @@
 .. _modeling:
 
-Modeling language
+Modeling protocol
 =================
 
 In order to make hyperparameter optimization, george comes with a modeling
@@ -142,9 +142,7 @@ implementation would be something like the following:
         return grad[self.unfrozen]
 
     def freeze_parameter(self, parameter_name):
-        names = self.parameter_names
-        self.unfrozen[names.index(parameter_name)] = False
+        self.unfrozen[self.parameter_names.index(parameter_name)] = False
 
     def thaw_parameter(self, parameter_name):
-        names = self.parameter_names
-        self.unfrozen[names.index(parameter_name)] = True
+        self.unfrozen[self.parameter_names.index(parameter_name)] = True
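The freeze/thaw bookkeeping being simplified in this hunk can be sketched standalone. The attribute names (``parameter_names``, ``unfrozen``) follow the diff; the rest of the class is a minimal assumption, not george's actual model base class:

```python
import numpy as np

class ToyModel:
    # Minimal sketch of the modeling protocol's freeze/thaw bookkeeping:
    # a boolean mask selects which parameters the optimizer may vary.
    parameter_names = ["ln_constant", "ln_period"]

    def __init__(self):
        self.parameters = np.array([0.0, 1.0])
        self.unfrozen = np.ones(len(self.parameters), dtype=bool)

    def freeze_parameter(self, parameter_name):
        self.unfrozen[self.parameter_names.index(parameter_name)] = False

    def thaw_parameter(self, parameter_name):
        self.unfrozen[self.parameter_names.index(parameter_name)] = True

    def get_vector(self):
        return self.parameters[self.unfrozen]

m = ToyModel()
m.freeze_parameter("ln_constant")
print(m.get_vector())  # only ln_period remains free
```

The one-liner in the diff is behaviorally identical to the two-line version it replaces; it just drops the temporary ``names`` binding.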

george/metrics.py (+2 -2)

@@ -51,7 +51,7 @@ def __init__(self, metric, ndim=None, axes=None, lower=True):
                 raise ValueError("invalid (negative) metric")
             for i, v in enumerate(metric):
                 self.parameter_names.append("ln_M_{0}_{0}".format(i))
-                self.parameters.append(v)
+                self.parameters.append(np.log(v))
 
         elif len(metric.shape) == 2:
             self.metric_type = 2
@@ -83,7 +83,7 @@ def __init__(self, metric, ndim=None, axes=None, lower=True):
         else:
            self.metric_type = 0
            self.parameter_names.append("ln_M_0_0")
-           self.parameters.append(metric)
+           self.parameters.append(np.log(metric))
 
        self.parameters = np.array(self.parameters)
        self.unfrozen = np.ones_like(self.parameters, dtype=bool)
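The switch to storing ``np.log(v)`` makes the stored values consistent with the parameter names (``ln_M_0_0``): the metric is supplied on the linear scale but held internally as a log, so an unconstrained optimizer cannot drive it negative. A standalone round-trip sketch (not george's code) of that convention:

```python
import numpy as np

metric = [1.0, 4.0]                      # user-facing, linear scale
stored = [np.log(v) for v in metric]     # what lands in self.parameters
recovered = [np.exp(p) for p in stored]  # linear metric reconstructed
```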

george/testing/test_kernels.py (+21 -61)

@@ -47,50 +47,16 @@ def do_kernel_t(kernel, N=20, seed=123, eps=1.32e-6):
             "Gradient computation failed in dimension {0}".format(i)
 
 
-#
-# BASIC KERNELS
-#
-
-# def test_custom():
-#     def f(x1, x2, p):
-#         return np.exp(-0.5 * np.dot(x1, x2) / p[0])
-
-#     def g(x1, x2, p):
-#         arg = 0.5 * np.dot(x1, x2) / p[0]
-#         return np.exp(-arg) * arg / p[0]
-
-#     def wrong_g(x1, x2, p):
-#         arg = 0.5 * np.dot(x1, x2) / p[0]
-#         return 10 * np.exp(-arg) * arg / p[0]
-
-#     do_kernel_t(kernels.PythonKernel(f, g, pars=[0.5]))
-#     do_kernel_t(kernels.PythonKernel(f, g, pars=[0.1]))
-
-#     try:
-#         do_kernel_t(kernels.PythonKernel(f, wrong_g, pars=[0.5]))
-#     except AssertionError:
-#         pass
-#     else:
-#         assert False, "This test should fail"
-
-
-# def test_custom_numerical():
-#     def f(x1, x2, p):
-#         return np.exp(-0.5 * np.dot(x1, x2) / p[0])
-#     do_kernel_t(kernels.PythonKernel(f, pars=[0.5]))
-#     do_kernel_t(kernels.PythonKernel(f, pars=[10.0]))
-
-
 def test_constant():
-    do_kernel_t(kernels.ConstantKernel(0.1))
-    do_kernel_t(kernels.ConstantKernel(10.0, 2))
-    do_kernel_t(kernels.ConstantKernel(5.0, 5))
+    do_kernel_t(kernels.ConstantKernel(constant=0.1))
+    do_kernel_t(kernels.ConstantKernel(constant=10.0, ndim=2))
+    do_kernel_t(kernels.ConstantKernel(constant=5.0, ndim=5))
 
 
 def test_dot_prod():
     do_kernel_t(kernels.DotProductKernel())
-    do_kernel_t(kernels.DotProductKernel(2))
-    do_kernel_t(kernels.DotProductKernel(5))
+    do_kernel_t(kernels.DotProductKernel(ndim=2))
+    do_kernel_t(kernels.DotProductKernel(ndim=5, axes=0))
 
 
 #
@@ -145,35 +111,29 @@ def test_matern52():
 
 
 def test_rational_quadratic():
-    do_stationary_t(kernels.RationalQuadraticKernel, ln_alpha=np.log(1.0))
-    do_stationary_t(kernels.RationalQuadraticKernel, ln_alpha=np.log(0.1))
-    do_stationary_t(kernels.RationalQuadraticKernel, ln_alpha=np.log(10.0))
+    do_stationary_t(kernels.RationalQuadraticKernel, alpha=1.0)
+    do_stationary_t(kernels.RationalQuadraticKernel, alpha=0.1)
+    do_stationary_t(kernels.RationalQuadraticKernel, alpha=10.0)
 
 
-#
-# PERIODIC KERNELS
-#
-
 def test_cosine():
-    do_kernel_t(kernels.CosineKernel(1.0))
-    do_kernel_t(kernels.CosineKernel(0.5, ndim=2))
-    do_kernel_t(kernels.CosineKernel(0.5, ndim=2, axes=1))
-    do_kernel_t(kernels.CosineKernel(0.75, ndim=5, axes=[2, 3]))
+    do_kernel_t(kernels.CosineKernel(period=1.0))
+    do_kernel_t(kernels.CosineKernel(period=0.5, ndim=2))
+    do_kernel_t(kernels.CosineKernel(period=0.5, ndim=2, axes=1))
+    do_kernel_t(kernels.CosineKernel(period=0.75, ndim=5, axes=[2, 3]))
 
 
 def test_exp_sine2():
-    do_kernel_t(kernels.ExpSine2Kernel(0.4, 1.0))
-    do_kernel_t(kernels.ExpSine2Kernel(12., 0.5, ndim=2))
-    do_kernel_t(kernels.ExpSine2Kernel(17., 0.5, ndim=2, axes=1))
-    do_kernel_t(kernels.ExpSine2Kernel(13.7, -0.75, ndim=5, axes=[2, 3]))
-    do_kernel_t(kernels.ExpSine2Kernel(-0.7, 0.75, ndim=5, axes=[2, 3]))
-    do_kernel_t(kernels.ExpSine2Kernel(-10, 0.75))
-
+    do_kernel_t(kernels.ExpSine2Kernel(gamma=0.4, period=1.0))
+    do_kernel_t(kernels.ExpSine2Kernel(gamma=12., period=0.5, ndim=2))
+    do_kernel_t(kernels.ExpSine2Kernel(gamma=17., period=0.5, ndim=2, axes=1))
+    do_kernel_t(kernels.ExpSine2Kernel(gamma=13.7, ln_period=-0.75, ndim=5,
+                                       axes=[2, 3]))
+    do_kernel_t(kernels.ExpSine2Kernel(gamma=-0.7, period=0.75, ndim=5,
+                                       axes=[2, 3]))
+    do_kernel_t(kernels.ExpSine2Kernel(gamma=-10, period=0.75))
 
-#
-# COMBINING KERNELS
-#
 
 def test_combine():
-    do_kernel_t(12 * kernels.ExpSine2Kernel(0.4, 1.0, ndim=5) + 0.1)
+    do_kernel_t(12 * kernels.ExpSine2Kernel(gamma=0.4, period=1.0, ndim=5))
     do_kernel_t(12 * kernels.ExpSquaredKernel(0.4, ndim=3) + 0.1)
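The failure message in the context above suggests ``do_kernel_t`` compares analytic kernel gradients against finite differences. A generic sketch of that style of check follows; the ``eps`` default and the assertion message are taken from the diff, everything else is assumed rather than george's actual helper:

```python
import numpy as np

def check_gradient(f, grad_f, x, eps=1.32e-6):
    # Compare an analytic gradient to central finite differences,
    # one input dimension at a time.
    g = grad_f(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        num = (f(xp) - f(xm)) / (2.0 * eps)
        assert np.allclose(num, g[i], atol=1e-5), \
            "Gradient computation failed in dimension {0}".format(i)

# Sanity check with f(x) = sum(x**2), whose gradient is 2x:
check_gradient(lambda x: np.sum(x ** 2), lambda x: 2 * x,
               np.array([0.5, -1.2, 3.0]))
```

A deliberately wrong gradient (say ``3 * x``) would trip the assertion, which is the same mechanism the removed ``wrong_g`` test relied on.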

kernels/Constant.yml (+1 -1)

@@ -11,7 +11,7 @@ doc: |
 
     where :math:`c` is a parameter.
 
-    :param lnconstant:
+    :param ln_constant:
         The logarithm of the constant value :math:`c` in the above equation.
 
 value: |

setup.py (+2)

@@ -170,6 +170,8 @@ def build_extension(self, ext):
     if len(kernel_specs):
         print("Compiling kernels")
         compile_kernels(kernel_specs)
+        if "kernels" in sys.argv:
+            sys.exit()
 
     # Check for the Cython source (development mode) and compile it if it
     # exists.

templates/kernels.py (+13 -6)

@@ -101,15 +101,15 @@ def __len__(self):
 
     def __add__(self, b):
         if not hasattr(b, "is_kernel"):
-            return Sum(ConstantKernel(np.log(float(b)), ndim=self.ndim), self)
+            return Sum(ConstantKernel(constant=float(b), ndim=self.ndim), self)
         return Sum(self, b)
 
     def __radd__(self, b):
         return self.__add__(b)
 
     def __mul__(self, b):
         if not hasattr(b, "is_kernel"):
-            return Product(ConstantKernel(np.log(float(b)), ndim=self.ndim),
+            return Product(ConstantKernel(constant=float(b), ndim=self.ndim),
                            self)
         return Product(self, b)
 
@@ -281,6 +281,8 @@ class {{ spec.name }} (Kernel):
 
     def __init__(self,
                  {% for p in spec.params %}{{ p }}=None,
+                 {%- if p.startswith("ln_") %}
+                 {{ p[3:] }}=None,{% endif %}
                  {% endfor -%}
                  {% if spec.stationary -%}
                  metric=None,
@@ -292,11 +294,16 @@ def __init__(self,
         self._unfrozen = np.ones({{ spec.params|length }}, dtype=bool)
 
         {% for p in spec.params -%}
-        if {{ p }} is None:
-            raise ValueError("missing required parameter '{{ p }}'")
+        if {{ p }} is None{% if p.startswith("ln_") %} and {{ p[3:] }} is None{% endif %}:
+            raise ValueError("missing required parameter '{{ p }}'{% if p.startswith("ln_") %} or '{{ p[3:] }}'{% endif %}")
+        {%- if p.startswith("ln_") %}
+        elif {{ p }} is None:
+            if {{ p[3:] }} <= 0.0:
+                raise ValueError("invalid parameter '{{ p[3:] }} <= 0.0'")
+            {{ p }} = np.log({{ p[3:] }})
+        {%- endif %}
         self.{{ p }} = {{ p }}
-        {% endfor -%}
-
+        {% endfor %}
         {% if spec.stationary -%}
         if metric is None:
             raise ValueError("missing required parameter 'metric'")
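For every ``ln_``-prefixed parameter, the template above now generates constructor code that accepts either the log value or its linear counterpart. Rendered for a hypothetical parameter named ``ln_constant``, the generated logic behaves roughly like this standalone sketch:

```python
import numpy as np

def resolve(ln_constant=None, constant=None):
    # Approximation of what the template expands to for "ln_constant":
    # require one of the two spellings, validate positivity of the
    # linear form, and store the log internally.
    if ln_constant is None and constant is None:
        raise ValueError("missing required parameter 'ln_constant' or 'constant'")
    elif ln_constant is None:
        if constant <= 0.0:
            raise ValueError("invalid parameter 'constant <= 0.0'")
        ln_constant = np.log(constant)
    return ln_constant
```

So ``resolve(constant=10.0)`` and ``resolve(ln_constant=np.log(10.0))`` store the same value, which is what lets the tests in this commit pass ``constant=``, ``period=``, or ``alpha=`` on the linear scale.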
