add config_parser in trainer_config_helpers to seperate trainer config #1

Closed
Changes from 1 commit
102 commits
529f24c
cpu cmrnorm
hedaoyuan Dec 12, 2016
9503590
add CrossMapNormal
hedaoyuan Dec 13, 2016
e357f27
add GPU CrossMapNormal
hedaoyuan Dec 13, 2016
0eac399
priorbox layer for ssd
Dec 13, 2016
39d689e
Format the priorbox code
Dec 14, 2016
438a704
add rnn_cn.md
Dec 14, 2016
9600932
Add fake gpu support of the priorbox layer for the moment
Dec 14, 2016
c007608
Format the python file.
Dec 14, 2016
a1d2abc
add Function
hedaoyuan Dec 14, 2016
ce1d98e
Add a Tensor to use as a Function argument
hedaoyuan Dec 15, 2016
214343a
modify details
Dec 15, 2016
660b310
modify line183 beam search
Dec 15, 2016
4ebb3eb
imporve Function
hedaoyuan Dec 15, 2016
9171ab0
Merge branch 'develop' of https://github.com/baidu/Paddle into cmrnorm
hedaoyuan Dec 15, 2016
707a9c9
Fix variable name and add the annotation
Dec 15, 2016
520342e
Fix code format
Dec 15, 2016
d2d0010
add CrossMapNormalGradFunc
hedaoyuan Dec 15, 2016
22a5e47
move Function to function dir
hedaoyuan Dec 15, 2016
558e869
add CMakeLists
hedaoyuan Dec 15, 2016
d11e2b4
Remove some useless code
hedaoyuan Dec 15, 2016
f13aeb5
fix swig_api
hedaoyuan Dec 15, 2016
1048aee
Add input layer check
Dec 15, 2016
cee9346
add some comments
hedaoyuan Dec 15, 2016
5222b58
support UBUNTU MIRROR and modify doc
wen-bo-yang Dec 16, 2016
2b91bf1
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
wen-bo-yang Dec 16, 2016
5b746fb
modify doc doc/getstarted/build_and_install/docker_install_en.rst
wen-bo-yang Dec 16, 2016
36af605
modify doc
wen-bo-yang Dec 16, 2016
9f990d9
Add unittest of the priorbox layer
Dec 16, 2016
8d9f675
Add header files
Dec 16, 2016
d40bb72
modify rnn_config_cn.rst
Dec 16, 2016
cad325f
Add header file
Dec 16, 2016
38723e7
remove random flag
Dec 19, 2016
7dfe3bd
remove gpu memory alloc
Dec 19, 2016
148bd4d
add Layer::createFunction
hedaoyuan Dec 19, 2016
1a06697
travis for check broken links
luotao1 Dec 19, 2016
706c572
Matrix API refactor, when passing parameters, convert shared_ptr (Mat…
Dec 16, 2016
4fbf949
Refactor MUL functions, pass object reference instead of shared_ptr.
Dec 20, 2016
4855821
Merge branch 'develop' into checker
luotao1 Dec 20, 2016
204152c
set -e for docs.sh
luotao1 Dec 20, 2016
8f08fa1
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
wen-bo-yang Dec 20, 2016
bf26679
update docker_install_en.rst
wen-bo-yang Dec 20, 2016
6f8f468
Add priorbox layer gpu unit test.
Dec 20, 2016
5fddd99
move TEST from test_matrixCompare.cpp to cross_map_normal_op_test.cpp
hedaoyuan Dec 20, 2016
bf32411
Merge branch 'develop' of https://github.com/baidu/Paddle into cmrnorm
hedaoyuan Dec 20, 2016
f1a94e3
follow comments
hedaoyuan Dec 20, 2016
f4f0f2d
Fix bug in config_parse.py when batch_norm layer is used in Recurrent…
qingqing01 Dec 20, 2016
35bbb4f
change float to real in two test
Dec 20, 2016
dadd48a
Merge pull request #963 from reyoung/feature/add_const_in_parameter_u…
reyoung Dec 20, 2016
9049369
Merge pull request #934 from tianbingsz/paddle_function_mat
tianbingsz Dec 20, 2016
42e1217
Merge pull request #854 from hedaoyuan/cmrnorm
tianbingsz Dec 20, 2016
37f7595
Merge pull request #927 from wen-bo-yang/develop_test
wangkuiyi Dec 20, 2016
84ad724
Adding namespace in timing macros
Dec 21, 2016
5bb29ec
close log info in BN.
qingqing01 Dec 21, 2016
a6f772b
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
qingqing01 Dec 21, 2016
67fcd89
fix array style problem
Dec 21, 2016
5471e87
Merge branch 'develop' into checker
luotao1 Dec 21, 2016
de8927e
refine docs.sh
luotao1 Dec 21, 2016
f202929
Change type float to real.
Dec 21, 2016
1b8e151
Support user specified label input in tests
Dec 21, 2016
06ea2bf
Merge pull request #967 from pengli09/fix_test_type
pengli09 Dec 21, 2016
39a5477
refine docs.sh
luotao1 Dec 21, 2016
4e34220
Merge pull request #970 from reyoung/feature/clean_parameter_updater_…
reyoung Dec 21, 2016
e4c492d
change type to bool.
qingqing01 Dec 21, 2016
567871f
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
qingqing01 Dec 21, 2016
446e3c2
Merge pull request #946 from luotao1/checker
luotao1 Dec 21, 2016
8bd4752
Merge branch 'develop' into add_label_seq_pos_to_inputdef
Dec 21, 2016
1f4f044
A tiny fix in PyDataProvider2
reyoung Dec 21, 2016
bbf3b47
Merge pull request #966 from qingqing01/batch_norm
qingqing01 Dec 21, 2016
adc5839
Merge pull request #969 from reyoung/feature/clean_gradient_machine_s…
reyoung Dec 21, 2016
d09564b
change std::vector<int> to const reference
Dec 21, 2016
cf5bf5b
Merge branch 'feature/fix_param_hidden_in_pydp2' into feature/mnist_t…
reyoung Dec 21, 2016
28c5010
Merge pull request #976 from pengli09/add_label_seq_pos_to_inputdef
pengli09 Dec 21, 2016
22aacbf
Add const to GradientMachine::eval
reyoung Dec 21, 2016
4d81b36
A tiny fix in PyDataProvider2
reyoung Dec 21, 2016
4d5a0b0
Also add const to makeEvaluator
reyoung Dec 21, 2016
1e6c87b
Merge branch 'feature/add_const_in_gradient_machine_eval' into featur…
reyoung Dec 21, 2016
e8e58fb
add config_parser in trainer_config_helpers to seperate trainer config
jacquesqiao Dec 21, 2016
8d24931
Change member variables from public to protected
Dec 21, 2016
eaba2e2
Expose Evaluator API
reyoung Dec 21, 2016
409a577
Complete a very simple mnist demo.
reyoung Dec 21, 2016
06dc66b
Merge branch 'feature/fix_param_hidden_in_pydp2' into feature/mnist_t…
reyoung Dec 21, 2016
b53bdcd
Merge pull request #867 from Noplz/ssd
qingqing01 Dec 22, 2016
e031f0c
Fix typo in PyDataProvider2.py
Dec 22, 2016
9baf7fc
Fix data provider bug in srl demo
Dec 22, 2016
c1b294a
Merge pull request #974 from emailweixu/timer_namespace
backyes Dec 22, 2016
db82a0e
Merge pull request #980 from reyoung/feature/add_const_in_gradient_ma…
gangliao Dec 22, 2016
89bf2e4
Change float to real in NormLayer.h
Dec 22, 2016
680dd92
Add AverageOptimizer, Add save parameter
reyoung Dec 22, 2016
c6b2bfe
Merge pull request #986 from pengli09/fix-pydata-provider-doc-typo
pengli09 Dec 22, 2016
a7b5d94
Merge pull request #987 from pengli09/fix-srl-demo-data-provider-bug
pengli09 Dec 22, 2016
4f70880
Merge pull request #881 from livc/rnn
Zrachel Dec 22, 2016
5bca268
Add gitignore
reyoung Dec 22, 2016
4490bf9
Merge pull request #990 from pengli09/norm-layer
qingqing01 Dec 22, 2016
59009ba
Always use copy method for numpy.
reyoung Dec 22, 2016
a31ef0c
Merge branch 'feature/mnist_train_api' of github.com:reyoung/Paddle i…
reyoung Dec 22, 2016
f06b64f
Test GPU
reyoung Dec 22, 2016
65e957c
Merge branch 'feature/mnist_train_api' of github.com:reyoung/Paddle i…
reyoung Dec 22, 2016
5a68584
Test on GPU
reyoung Dec 22, 2016
16ea66e
Merge branch 'develop' of github.com:baidu/Paddle into feature/mnist_…
reyoung Dec 22, 2016
3a80272
Add comments.
reyoung Dec 22, 2016
87d4e60
add config_parser_utils
jacquesqiao Dec 22, 2016
7029247
add config_parser_utils
jacquesqiao Dec 22, 2016
add config_parser in trainer_config_helpers to seperate trainer config
jacquesqiao committed Dec 21, 2016
commit e8e58fb0674afe7ef600ccea18ff9e8887e1ffd1
28 changes: 23 additions & 5 deletions demo/mnist/api_train.py
@@ -1,7 +1,25 @@
import py_paddle.swig_paddle as api
import paddle.trainer.config_parser
import numpy as np

import paddle.trainer_config_helpers.config_parser as config_parser
from paddle.trainer_config_helpers import *


def optimizer_config():
settings(
learning_rate=1e-4, learning_method=AdamOptimizer(), batch_size=1000)


def network_config():
imgs = data_layer(name='pixel', size=784)
hidden1 = fc_layer(input=imgs, size=200)
hidden2 = fc_layer(input=hidden1, size=200)
inference = fc_layer(input=hidden2, size=10, act=SoftmaxActivation())
cost = classification_cost(
input=inference, label=data_layer(
name='label', size=10))
outputs(cost)


def init_parameter(network):
assert isinstance(network, api.GradientMachine)
@@ -15,15 +33,15 @@ def init_parameter(network):

def main():
api.initPaddle("-use_gpu=false", "-trainer_count=4") # use 4 cpu cores
config = paddle.trainer.config_parser.parse_config(
'simple_mnist_network.py', '')

opt_config = api.OptimizationConfig.createFromProto(config.opt_config)
opt_config_proto = config_parser.parse_optimizer_config(optimizer_config)
opt_config = api.OptimizationConfig.createFromProto(opt_config_proto)
_temp_optimizer_ = api.ParameterOptimizer.create(opt_config)
enable_types = _temp_optimizer_.getParameterTypes()

model_config = config_parser.parse_network_config(network_config)
m = api.GradientMachine.createFromConfigProto(
config.model_config, api.CREATE_MODE_NORMAL, enable_types)
model_config, api.CREATE_MODE_NORMAL, enable_types)
assert isinstance(m, api.GradientMachine)
init_parameter(network=m)

16 changes: 0 additions & 16 deletions demo/mnist/simple_mnist_network.py

This file was deleted.

70 changes: 36 additions & 34 deletions python/paddle/trainer/config_parser.py
@@ -3387,8 +3387,35 @@ def register_parse_config_hook(f):
_parse_config_hooks.add(f)


def parse_config(config_file, config_arg_str):
def update_g_config():
'''
Update g_config after execute config_file or config_functions.
'''
for k, v in settings.iteritems():
if v is None:
continue
g_config.opt_config.__setattr__(k, v)

for k, v in trainer_settings.iteritems():
if v is None:
continue
g_config.__setattr__(k, v)

for name in g_config.model_config.input_layer_names:
assert name in g_layer_map, \
'input name "%s" does not correspond to a layer name' % name
assert (g_layer_map[name].type == "data" or g_layer_map[name].type == "data_trim"), \
'The type of input layer "%s" is not "data"' % name
for name in g_config.model_config.output_layer_names:
assert name in g_layer_map, \
'input name "%s" does not correspond to a layer name' % name
return g_config


def parse_config(trainer_config, config_arg_str):
'''
@param trainer_config: can be a string of config file name or a function name
with config logic
@param config_arg_str: a string of the form var1=val1,var2=val2. It will be
passed to config script as a dictionary CONFIG_ARGS
'''
@@ -3422,45 +3449,20 @@ def parse_config(config_file, config_arg_str):
g_root_submodel.is_recurrent_layer_group = False
g_current_submodel = g_root_submodel

# for paddle on spark, need support non-file config.
# you can use parse_config like below:
#
# from paddle.trainer.config_parser import parse_config
# def configs():
# #your paddle config code, which is same as config file.
#
# config = parse_config(configs, "is_predict=1")
# # then you get config proto object.
if hasattr(config_file, '__call__'):
config_file.func_globals.update(
if hasattr(trainer_config, '__call__'):
trainer_config.func_globals.update(
make_config_environment("", config_args))
config_file()
trainer_config()
else:
execfile(config_file, make_config_environment(config_file, config_args))
for k, v in settings.iteritems():
if v is None:
continue
g_config.opt_config.__setattr__(k, v)

for k, v in trainer_settings.iteritems():
if v is None:
continue
g_config.__setattr__(k, v)
execfile(trainer_config,
make_config_environment(trainer_config, config_args))

for name in g_config.model_config.input_layer_names:
assert name in g_layer_map, \
'input name "%s" does not correspond to a layer name' % name
assert (g_layer_map[name].type == "data" or g_layer_map[name].type == "data_trim"), \
'The type of input layer "%s" is not "data"' % name
for name in g_config.model_config.output_layer_names:
assert name in g_layer_map, \
'input name "%s" does not correspond to a layer name' % name
return g_config
return update_g_config()


def parse_config_and_serialize(config_file, config_arg_str):
def parse_config_and_serialize(trainer_config, config_arg_str):
try:
config = parse_config(config_file, config_arg_str)
config = parse_config(trainer_config, config_arg_str)
#logger.info(config)
return config.SerializeToString()
except:
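The key change in `parse_config` above is that `trainer_config` may now be either a path to a config script or a callable holding the same config logic. The dispatch can be sketched in isolation as below. This is an illustrative re-creation, not Paddle's actual implementation: the real function updates the callable's `func_globals` and mutates global parser state (`g_config`) rather than passing and returning an explicit environment, and it uses the Python 2 `execfile()` instead of `exec()`.

```python
import os
import tempfile


def parse_config(trainer_config, config_args):
    # Accept either a callable or a config-file path (hypothetical
    # simplified version of the dispatch in the revised parse_config).
    env = {"CONFIG_ARGS": config_args, "settings": {}}
    if callable(trainer_config):
        trainer_config(env)  # config logic supplied as a function
    else:
        with open(trainer_config) as f:
            exec(f.read(), env)  # config logic supplied as a script file
    return env["settings"]


def my_optimizer_config(env):
    # Plays the role of the optimizer_config() function in api_train.py.
    env["settings"]["learning_rate"] = 1e-4


# Callable form, as used by parse_optimizer_config(optimizer_config)
print(parse_config(my_optimizer_config, {}))  # {'learning_rate': 0.0001}

# File form, as used by the original parse_config('<file>.py', '')
path = os.path.join(tempfile.mkdtemp(), "conf.py")
with open(path, "w") as f:
    f.write("settings['batch_size'] = 1000\n")
print(parse_config(path, {}))  # {'batch_size': 1000}
```

Supporting both forms is what lets the demo drop `simple_mnist_network.py` and define `network_config()` and `optimizer_config()` directly in `api_train.py`.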
1 change: 1 addition & 0 deletions python/paddle/trainer_config_helpers/__init__.py
@@ -20,6 +20,7 @@
from networks import *
from optimizers import *
from attrs import *
from config_parser import *

# This will enable operator overload for LayerOutput
import math as layer_math
38 changes: 38 additions & 0 deletions python/paddle/trainer_config_helpers/config_parser.py
@@ -0,0 +1,38 @@
# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
Owner comment:

How about a different file name? This one feels like it has already been used.

Something like config_parser_utils.py would be fine.

#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import paddle.trainer.config_parser as config_parser
'''
This file is a wrapper of the formal config_parser. The main idea of this file is to
separate different config logic into different functions, such as network configuration
and optimizer configuration.
'''

__all__ = [
"parse_trainer_config", "parse_network_config", "parse_optimizer_config"
]


def parse_trainer_config(trainer_conf, config_arg_str):
return config_parser.parse_config(trainer_conf, config_arg_str)


def parse_network_config(network_conf):
config = config_parser.parse_config(network_conf, '')
Owner comment:

Let's still expose config_arg_str here; the default can be set to '', i.e.

def parse_network_config(network_conf, config_arg_str=None):
    if config_arg_str is None:
        config_arg_str = ''
    ...

This parameter is quite useful.

return config.model_config


def parse_optimizer_config(optimizer_conf):
config = config_parser.parse_config(optimizer_conf, '')
Owner comment:

Same as above.

return config.opt_config
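Folding in the reviewer's suggestion, the two single-purpose wrappers would expose `config_arg_str` with a `''` default. The sketch below stubs out `parse_config` (the real one lives in `paddle.trainer.config_parser`, and the `_Config` fields modeled here are an assumption) just to show the resulting signatures:

```python
class _Config:
    # Minimal stand-in for the config proto returned by parse_config;
    # only the two fields the wrappers read are modeled.
    model_config = "model-proto"
    opt_config = "optimizer-proto"


def _parse_config(conf, config_arg_str):
    # Stand-in for paddle.trainer.config_parser.parse_config.
    return _Config()


def parse_network_config(network_conf, config_arg_str=''):
    # config_arg_str exposed with a '' default, per the review comment.
    return _parse_config(network_conf, config_arg_str).model_config


def parse_optimizer_config(optimizer_conf, config_arg_str=''):
    return _parse_config(optimizer_conf, config_arg_str).opt_config


print(parse_network_config(lambda: None))                    # model-proto
print(parse_optimizer_config(lambda: None, "is_predict=1"))  # optimizer-proto
```

Callers that don't need `CONFIG_ARGS` keep the old one-argument call sites, while predict-time callers can pass strings like "is_predict=1" through.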