Hi Jared,
Thanks for providing DeepSurv. I have a question about the hyperparameter search. I am not sure what the benefit is of using the Docker/Optunity setup over simply giving a list of parameter options in the hyperparameter variable, as shown in the hyperparams dictionary from DeepSurv_example.ipynb, reproduced below.
You can use the GPU without Docker. For matrix and vector computations, NumPy falls back on its base C implementation when no optimized backend is linked in. To speed things up, you can configure the CUDA drivers (if your GPU is CUDA-compatible), or manually build OpenBLAS (or another BLAS library) and link it to Theano through the system's environment variables.
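As a concrete sketch of that configuration (the OpenBLAS install path here is an assumption, not from this thread), Theano can be pointed at a GPU or at a hand-built BLAS via its configuration flags:

```shell
# Select the GPU backend (requires NVIDIA drivers and the CUDA toolkit).
# Older Theano releases use device=gpu instead of device=cuda.
export THEANO_FLAGS="device=cuda,floatX=float32"

# Alternatively, link a manually built OpenBLAS for CPU math in ~/.theanorc
# (adjust -L to wherever your build landed; /opt/OpenBLAS/lib is assumed):
#   [blas]
#   ldflags = -L/opt/OpenBLAS/lib -lopenblas
```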
hyperparams = {
    'L2_reg': 10.0,                    # L2 weight-decay coefficient
    'batch_norm': True,                # use batch normalization
    'dropout': 0.4,                    # dropout rate
    'hidden_layers_sizes': [25, 25],   # two hidden layers of 25 units each
    'learning_rate': 1e-05,
    'lr_decay': 0.001,                 # learning-rate decay
    'momentum': 0.9,                   # SGD momentum
    'n_in': train_data['x'].shape[1],  # number of input features
    'standardize': True                # standardize the input data
}
For example, you could specify the learning rate as a list, [0.1, 0.001, 0.0001], here.
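A list of options like that could be swept with a plain grid search. The sketch below shows the idea; the train_and_score function is a hypothetical stand-in for fitting DeepSurv and returning a validation concordance index, not part of the repo. The advantage of a dedicated optimizer such as Optunity is that it can search continuous ranges with solvers smarter than an exhaustive grid, so it typically needs fewer training runs:

```python
from itertools import product

# Candidate values to sweep over (illustrative choices).
search_space = {
    'learning_rate': [0.1, 0.001, 0.0001],
    'dropout': [0.2, 0.4],
}

def train_and_score(params):
    # Hypothetical stand-in: in practice this would train DeepSurv with
    # `params` and return a validation concordance index.
    return -abs(params['learning_rate'] - 0.001) - abs(params['dropout'] - 0.4)

best_params, best_score = None, float('-inf')
for values in product(*search_space.values()):
    params = dict(zip(search_space, values))
    score = train_and_score(params)
    if score > best_score:
        best_params, best_score = params, score

print(best_params)  # the grid point with the highest score
```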
In addition, is it OK to run your script in parallel across multiple cores?
Much appreciated!