Steps to run the emulator with MR-MPI
You need to create phm_count files as input to newEmulator.
Running phmCount-mrmpi-slurm will create the phm_count files needed by the emulator process, via a MapReduce job using MR-MPI.
Note: you first need to run downsample.py to generate the down_sample_files from the pileheight records. The path to the pileheight records is a string in parser.py that must be modified to point at the location of the current pileheight records.
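Since downsample.py and parser.py are not shown here, the downsampling step can only be sketched; the record format (whitespace-separated columns) and the stride are assumptions, so consult the real scripts for the actual layout:

```python
# Hypothetical sketch of the downsampling step performed by downsample.py.
# The record format and the stride are assumptions; see downsample.py and
# parser.py for the real layout and the pileheight-record path.

def downsample(lines, stride=4):
    """Keep every `stride`-th pileheight record."""
    return [line for i, line in enumerate(lines) if i % stride == 0]

records = ["0.0 0.0 1.2", "0.1 0.0 1.4", "0.2 0.0 1.1", "0.3 0.0 0.9"]
print(downsample(records, stride=2))  # keeps records 0 and 2
```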
$ cd mapreduce/mrmpi/src/phm_count-mrmpi
$ sbatch phmCount-mrmpi-slurm
Note: you will probably want to edit some of these files to change directory locations and the number of nodes requested.
Next, run newEmulator (on the 2048 input files). It is similar to the old EmulatorFork code, but is designed to run on a single node only and reads its input from and writes its output to files rather than a database.
$ cd mapreduce/mrmpi/src
$ make
$ cd mapreduce/mrmpi/src/emulatorBatchSlurm
$ python generateBatchEmulatorSlurmScripts.py
$ for f in *slurm; do sbatch $f; done
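generateBatchEmulatorSlurmScripts.py is not shown here; a minimal sketch of the pattern it likely follows (splitting the 2048 input files into per-node batches and emitting one slurm script per batch) might look like this. The batch size, template, and #SBATCH options are assumptions:

```python
# Hypothetical sketch, in the spirit of generateBatchEmulatorSlurmScripts.py,
# of generating one single-node slurm script per batch of input files.
# The batch size, script template, and #SBATCH options are assumptions --
# edit them to match your site and the real generator script.

TEMPLATE = """#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=01:00:00
./newEmulator {first} {last}
"""

def make_scripts(n_files=2048, batch=256):
    """Return a mapping of slurm script name -> script contents."""
    scripts = {}
    for first in range(0, n_files, batch):
        last = min(first + batch, n_files) - 1
        name = "emulator_{0}-{1}.slurm".format(first, last)
        scripts[name] = TEMPLATE.format(first=first, last=last)
    return scripts
```

After writing each script to disk, they can be submitted with the loop shown above (for f in *slurm; do sbatch $f; done).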
Note: you will probably want to edit some of these files to change directory locations.
Once you have all of the output files from newEmulator, you can run the MR-MPI code to merge the data.
$ cd mapreduce/mrmpi/src/newReduce
$ sbatch emulator-slurm
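Conceptually, the MR-MPI job in newReduce collates the per-file partial results by key and reduces each group. A serial Python analogue of that collate/reduce pattern is sketched below; the grid-cell keys and the combine rule (summation) are assumptions, not the actual newReduce logic:

```python
# Serial analogue of the MR-MPI collate/reduce merge. MR-MPI groups
# key-value pairs by key across processors, then calls a user reduce
# callback once per key; here both steps are done in plain Python.
# The (ix, iy) cell keys and summation combine rule are assumptions.
from collections import defaultdict

def merge(partials):
    """Merge a list of per-file {cell: value} dicts into one dict."""
    grouped = defaultdict(list)
    for records in partials:          # one dict per newEmulator output file
        for cell, value in records.items():
            grouped[cell].append(value)
    return {cell: sum(vals) for cell, vals in grouped.items()}

a = {(0, 0): 1, (0, 1): 2}
b = {(0, 0): 3}
print(merge([a, b]))  # {(0, 0): 4, (0, 1): 2}
```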
Note: you will probably want to edit some of these files to change directory locations.
Once you have the output files from MR-MPI, you can create the hazard map.
$ cd mapreduce/mrmpi/src/hazardMap
$ python final.py
After this has finished, it will create a file called hazard_map in /scratch. From there, use the view_phm.m MATLAB script to visualize the output.
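The final assembly step can be sketched as turning the merged per-cell values into a dense grid and writing it as text. The grid shape, cell keys, and output format below are assumptions; final.py and view_phm.m define the real layout:

```python
# Hypothetical sketch of the hazard-map assembly done by final.py: place
# merged (cell -> probability) pairs on a dense grid and write the grid
# as whitespace-separated text. Shapes, keys, and format are assumptions.

def to_grid(cells, nx, ny, default=0.0):
    """cells maps (ix, iy) -> hazard probability for that grid cell."""
    return [[cells.get((ix, iy), default) for ix in range(nx)]
            for iy in range(ny)]

def write_hazard_map(path, grid):
    with open(path, "w") as f:
        for row in grid:
            f.write(" ".join("%g" % v for v in row) + "\n")

grid = to_grid({(0, 0): 0.9, (1, 1): 0.5}, nx=2, ny=2)
print(grid)  # [[0.9, 0.0], [0.0, 0.5]]
```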