Source for "Spooky Boundaries at a Distance: Forward-Looking Models with Inductive Bias"
Within a Python environment, clone this repository with git and run pip install -r requirements.txt.
See more complete instructions below in the detailed installation section.
(Demo video: asset_pricing_sequential.mp4)

This solves for the price path $p_t$ given a dividend stream $y_t$: a path fulfilling a linear asset-pricing recursion of the form $p_t = y_t + \beta p_{t+1}$ together with the no-bubble (transversality) condition. The price path is approximated by a neural network that takes the time period $t$ as input.
See the file asset_pricing_sequential_defaults.yaml for the default values.
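For orientation, this file follows the PyTorch Lightning CLI configuration layout used by the command-line flags below. The keys and values in this sketch are illustrative, not the shipped defaults; consult the actual file in the repository:

```yaml
# Illustrative sketch only: see asset_pricing_sequential_defaults.yaml
# in the repository for the real keys and default values.
seed_everything: 123
trainer:
  max_epochs: 1000
model:
  c: 0.01
  ml_model:
    init_args:
      hidden_dim: 128
      layers: 4
optimizer:
  class_path: torch.optim.Adam
  init_args:
    lr: 0.001
```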
You can load the Jupyter notebook asset_pricing_sequential.ipynb directly in VS Code, on the command line with jupyter lab, or in Google Colab. This notebook provides a simple example of training the asset pricing sequential model, along with utilities to examine the output without using the command line.
A note on Google Colab: this link provides instructions on how to open the notebooks in Colab.
You can run with the baseline parameters:
python asset_pricing_sequential.py
If you want to override the defaults, here are some examples:
python asset_pricing_sequential.py --seed_everything 101 --trainer.max_epochs=10000
python asset_pricing_sequential.py --trainer.max_time 00:00:01:00
To modify the optimizer and/or learning rate schedulers, you can do things like:
python asset_pricing_sequential.py --trainer.max_epochs=500 --optimizer=torch.optim.Adam --optimizer.lr=0.001 --optimizer.weight_decay=0.00001 --trainer.callbacks.patience=500
python asset_pricing_sequential.py --trainer.max_epochs=1 --optimizer=LBFGS --optimizer.lr=1.0
Modifying the callbacks is similar:
python asset_pricing_sequential.py --trainer.callbacks=TQDMProgressBar --trainer.callbacks.refresh_rate=0
python asset_pricing_sequential.py --trainer.callbacks=LearningRateMonitor --trainer.callbacks.logging_interval=epoch --trainer.callbacks.log_momentum=false
To modify the ML model, you can pass options such as:
python asset_pricing_sequential.py --model.ml_model.init_args.layers=4
python asset_pricing_sequential.py --model.ml_model.init_args.hidden_dim=122 --model.ml_model.init_args.layers=6
You can also modify the activators, which will use the default parameters of their respective classes:
python asset_pricing_sequential.py --model.ml_model.init_args.activator torch.nn.Softplus
python asset_pricing_sequential.py --model.ml_model.init_args.last_activator torch.nn.Tanh
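For intuition, hidden_dim, layers, activator, and last_activator describe a standard feed-forward network. Here is a minimal sketch of such an architecture, for illustration only; it is not the repository's actual ml_model implementation:

```python
import torch.nn as nn

def build_mlp(hidden_dim=128, layers=4,
              activator=nn.ReLU, last_activator=nn.Identity):
    """Illustrative sketch: `layers` hidden layers of width `hidden_dim`,
    `activator` between layers, `last_activator` on the output. The input
    is the scalar time period t, the output the approximated path value."""
    blocks = [nn.Linear(1, hidden_dim), activator()]
    for _ in range(layers - 1):
        blocks += [nn.Linear(hidden_dim, hidden_dim), activator()]
    blocks += [nn.Linear(hidden_dim, 1), last_activator()]
    return nn.Sequential(*blocks)

# Roughly what the two activator overrides above correspond to:
net = build_mlp(activator=nn.Softplus, last_activator=nn.Tanh)
```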
To change economic variables such as the dividend value c, you can try:
python asset_pricing_sequential.py --model.c=0.01
To see all of the available options, run:
python asset_pricing_sequential.py --help
The output of a run will be in a directory like ./wandb/offline-run-.... You can also view logs online by running:
wandb sync ./wandb/offline-run-...
(Demo video: growth_sequential.mp4)
The sequential neoclassical growth model solves for the capital path $k_t$ from a given initial condition $k_0$. The capital path is approximated by a neural network that takes the time period $t$ as input.
You can load the Jupyter notebook growth_sequential.ipynb directly in VS Code, on the command line with jupyter lab, or in Google Colab. This notebook provides a simple example of training the neoclassical sequential model, along with utilities to examine the output without using the command line.
A note on Google Colab: this link provides instructions on how to open the notebooks in Colab.
You can run with the baseline parameters using:
python growth_sequential.py
You can pass parameters and modify the optimizer or ML model in a similar way as for asset_pricing_sequential.py.
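For example, here is an illustrative combination of the same flags shown above:
python growth_sequential.py --trainer.max_epochs=1000 --optimizer=torch.optim.Adam --optimizer.lr=0.001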
You can also modify the model parameters, for example the discount factor beta:
python growth_sequential.py --model.beta=0.89
Finally, it's possible to change the starting capital level k_0:
python growth_sequential.py --model.k_0=0.7
Additionally, you can run the model with the convex-concave production function, which has two steady states. To do so, you need to specify the parameters of the production function: a, b_1, b_2. We recommend running the model with a larger number of epochs and the Adam optimizer, like this:
python growth_sequential.py --model.a=0.5 --model.b_1=3.0 --model.b_2=2.5 --trainer.max_epochs=5000 --optimizer=torch.optim.Adam --optimizer.lr=0.001
This instead solves the neoclassical growth model for a recursive policy $k' = k'(z, k)$, where the map takes $z$ (the TFP level) and $k$ (capital) as inputs.
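For reference, the recursive formulation targets objects of the standard Bellman form for the neoclassical growth model. This is a generic statement for orientation; see the paper for the exact functional equation and residuals used in training:

$$ V(z, k) = \max_{k'} \; u\big(z f(k) + (1 - \delta)\, k - k'\big) + \beta\, V(z', k'), $$

where $u$ is the utility function, $f$ the production function, $\delta$ the depreciation rate, and $z'$ follows the TFP law of motion.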
See the file growth_recursive_defaults.yaml for the default values.
You can load the Jupyter notebook growth_recursive.ipynb directly in VS Code, on the command line with jupyter lab, or in Google Colab. This notebook provides a simple example of training the recursive neoclassical growth model, along with utilities to examine the output without using the command line.
A note on Google Colab: this link provides instructions on how to open the notebooks in Colab.
You can run the baseline parameters using:
python growth_recursive.py
All optimizer and ML options are consistent with growth_sequential. Additionally, you can modify the grid structure. For instance, to increase the number of grid points for the capital grid, you can try:
python growth_recursive.py --model.k_sim_grid_points=24
As in the sequential case, you can also run the model with two steady states. However, we recommend running this model with special overlapping capital grids, a separate validation set, and the RAdam optimizer, for example:
python growth_recursive.py \
  --lr_scheduler.class_path=torch.optim.lr_scheduler.StepLR \
  --lr_scheduler.gamma=0.95 --lr_scheduler.step_size=200 \
  --model.a=0.5 --model.b_1=3 --model.b_2=2.5 \
  --model.batch_size=0 --model.k_0=3.3 \
  --model.k_grid_max=25 --model.k_grid_max_2=1.5 \
  --model.k_grid_min=0.4 --model.k_grid_min_2=0.45 \
  --model.k_sim_grid_points=1024 --model.max_T_test=50 \
  --model.ml_model.activator.class_path=torch.nn.ReLU \
  --model.test_loss_success_threshold=0.0001 \
  --model.val_max_1=4.2 --model.val_max_2=1.2 \
  --model.val_min_1=3.1 --model.val_min_2=0.5 \
  --model.val_sim_grid_points=200 \
  --model.vfi_parameters.interpolation_kind=linear \
  --model.vfi_parameters.k_grid_size=1000 \
  --optimizer.class_path=torch.optim.RAdam --optimizer.lr=0.001 \
  --trainer.callbacks.monitor=val_loss \
  --trainer.callbacks.stopping_threshold=5e-06 \
  --trainer.limit_val_batches=5000 --trainer.max_epochs=5000
For solving the model with two steady states, please pay special attention to the retcode values.
One tool for testing the methods with different hyperparameters and setups is Weights and Biases.
This is a free service for academic use. It provides a dashboard to track experiments and a way to run hyperparameter optimization sweeps.
To use it, first create an account with Weights and Biases. Then, assuming you have installed the packages above, ensure you are logged in:
wandb login
Under hpo_sweeps, you can see the hyperparameter sweep files. If you want to start one, run
wandb sweep replication_scripts/asset_pricing_sequential_g_positive_ensemble.yaml
This will create a new sweep on the server. It will give you a URL to the sweep, which you can open in a browser. You can also see the sweep in your W&B dashboard. You will need the returned ID as well.
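For reference, W&B sweep files follow the standard W&B sweep schema. The sketch below is illustrative only; the program, metric, and parameter choices are assumptions, so see the actual files under hpo_sweeps and replication_scripts for the real configurations:

```yaml
# Illustrative sketch of a W&B sweep file, not one of the repo's files.
program: asset_pricing_sequential.py
method: grid
metric:
  name: val_loss
  goal: minimize
parameters:
  seed_everything:
    values: [101, 102, 103]
  optimizer.lr:
    values: [0.001, 0.0005]
```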
This doesn't create any "agents". To do that, take the <sweep_id> that was returned and run:
wandb agent <sweep_id>
See the W&B replication script for asset_pricing_sequential for an example. You can compare the capital and consumption errors depending on the seed.
For users with less experience using Python, conda, and VS Code, the following provides more details.
- Ensure you have installed Python, for example using Anaconda.
- Recommended but not required: install VS Code along with its Python Extension.
- Clone this repository
  - Make sure you have installed Git
  - Recommended: in VS Code, press <Shift-Control-P> to open the command bar, then choose Git Clone and use the URL https://github.com/HighDimensionalEconLab/transversality.git. That will give you a full environment to work with.
  - Alternatively, with git installed, you can clone it on the command line:
git clone https://github.com/HighDimensionalEconLab/transversality.git
- (Optional) Create a conda virtual environment:
  conda create -n transversality python=3.9
  conda activate transversality
- Python 3.10 is also broadly supported, but PyTorch doesn't fully support Python 3.11 yet. See Troubleshooting below if Python 3.10 has issues.
- (Optional) In VS Code, press <Shift-Control-P> to open the command bar, then choose > Python: Select Interpreter and pick the interpreter in the transversality environment. Future > Python: Terminal commands then automatically activate it.
  - If you are in VS Code, opening a Python terminal with <Shift-Control-P> then > Python: Terminal (and other terminals) should automatically activate the environment and start in the correct location.
- Install dependencies. With a terminal in the cloned folder (after optionally activating the environment as discussed above), run:
pip install -r requirements.txt
Troubleshooting:
- If you are having trouble installing packages on Windows with Python 3.10, then either downgrade to 3.9 or see here. To summarize those steps:
- Download https://visualstudio.microsoft.com/visual-cpp-build-tools/
- In a terminal, from the folder containing that download, run:
vs_buildtools.exe --norestart --passive --downloadThenInstall --includeRecommended --add Microsoft.VisualStudio.Workload.NativeDesktop --add Microsoft.VisualStudio.Workload.VCTools --add Microsoft.VisualStudio.Workload.MSBuildTools
- If PyTorch is not working after the initial installation, consider installing it manually with
conda install pytorch cpuonly -c pytorch
or something similar, and then retry the dependency installation. GPUs are not required for these experiments. If you get compatibility clashes between packages with pip install -r requirements.txt, we recommend using a conda virtual environment, as described above.
Deep learning methods involve a large number of tuning hyperparameters. A variety of tooling for ML and deep learning exists to help, mostly under the category of "ML DevOps". This includes tools for hyperparameter optimization, model versioning, managing results, model deployment, and running on clusters/clouds. Weights and Biases can also be used for hyperparameter tuning; it provides useful tools such as visualizations of hyperparameter correlations and evaluations.