Workflows

Workflows are an important part of a wide range of crystal plasticity simulations. They cover everything from model parameterization for new materials, to connecting microstructural effects to local macroscopic properties, to fatigue studies, to examining how a material's yield point changes with varying initial conditions, and more. Therefore, we are providing examples of workflows we have created for ExaConstit that others in the material modelling community might find useful.

Additive Manufacturing Workflow

This workflow is found under the Stage3 subdirectory and is based on the workflows described in https://doi.org/10.1177/10943420211042558 . It takes one or more microstructures constructed with a code such as ExaCA ( https://github.com/LLNL/ExaCA ) and runs them through a large number of loading conditions in order to fit an anisotropic yield surface. The Stage3/pre_main_post_script/chal_prob_full.py script provides an example of how ExaConstit drove the Stage 3 component of a full-machine Frontier run as part of a UQ pipeline the ExaAM project was interested in.
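
As a rough illustration of the yield-surface-fitting step, the hedged Python sketch below fits a quadratic (Hill-1948-style) anisotropic yield surface to yield stresses collected from a set of loading simulations. The yield surface form, stress values, and loading cases are placeholders for illustration only; the actual Stage 3 workflow post-processes the ExaConstit results with its own scripts and its own choice of yield surface.

```python
import numpy as np

# Placeholder yield-point data: each row holds the stress components at yield
# (s11, s22, s33, s23, s31, s12) from one hypothetical loading simulation.
# These numbers are made up and are NOT real ExaConstit output.
yield_states = np.array([
    [350.0,   0.0,   0.0,   0.0,   0.0,   0.0],  # uniaxial along x
    [  0.0, 340.0,   0.0,   0.0,   0.0,   0.0],  # uniaxial along y
    [  0.0,   0.0, 360.0,   0.0,   0.0,   0.0],  # uniaxial along z
    [  0.0,   0.0,   0.0, 195.0,   0.0,   0.0],  # pure shear yz
    [  0.0,   0.0,   0.0,   0.0, 205.0,   0.0],  # pure shear zx
    [  0.0,   0.0,   0.0,   0.0,   0.0, 200.0],  # pure shear xy
    [250.0, 250.0,   0.0,   0.0,   0.0,   0.0],  # equibiaxial x-y
])

s11, s22, s33, s23, s31, s12 = yield_states.T

# Hill 1948 form: F(s22-s33)^2 + G(s33-s11)^2 + H(s11-s22)^2
#                 + 2L*s23^2 + 2M*s31^2 + 2N*s12^2 = 1 at yield,
# which is linear in the coefficients, so a least-squares fit suffices.
A = np.column_stack([(s22 - s33) ** 2, (s33 - s11) ** 2, (s11 - s22) ** 2,
                     2.0 * s23 ** 2, 2.0 * s31 ** 2, 2.0 * s12 ** 2])
b = np.ones(len(yield_states))
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
F, G, H, L, M, N = coeffs
print("Fitted Hill48 coefficients:", coeffs)
```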

Material Parameter Optimizations

This workflow is found under the optimization subdirectory. The optimization framework was created through a collaboration between Professor Mike Sangid's group at Purdue University and LLNL. I would specifically like to note that @Leonidas-Z did a lot of the initial work here, as noted in the squash merge commit: 9cb6ebc9282e405b147314065c1684d617f7a1ae . It makes use of genetic algorithms provided through the Python DEAP package to optimize various crystal plasticity models against provided experimental or simulated observables such as macroscopic stress-strain curves. Multiple objectives can drive the optimization through the use of the U-NSGA-III algorithm.
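
For readers unfamiliar with DEAP, the sketch below shows the general shape of a multi-objective genetic-algorithm loop built on DEAP's standard tools. It uses DEAP's stock NSGA-III selection rather than the U-NSGA-III variant from our fork, and the parameter count, bounds, and evaluate function are stand-ins for the ExaConstit-driven objectives in ExaConstit_NSGA3.py.

```python
import random
from deap import base, creator, tools

# Two minimization objectives, e.g. stress-strain error for two loading cases.
creator.create("FitnessMin", base.Fitness, weights=(-1.0, -1.0))
creator.create("Individual", list, fitness=creator.FitnessMin)

N_PARAMS = 4                  # hypothetical number of material parameters
LOWER = [0.0] * N_PARAMS      # hypothetical parameter bounds
UPPER = [1.0] * N_PARAMS

def evaluate(individual):
    # Placeholder objectives: the real workflow templates an options file,
    # launches ExaConstit, and compares simulated vs. experimental curves.
    return sum(individual), max(individual)

toolbox = base.Toolbox()
toolbox.register("attr_float", random.uniform, 0.0, 1.0)
toolbox.register("individual", tools.initRepeat, creator.Individual,
                 toolbox.attr_float, n=N_PARAMS)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", evaluate)
toolbox.register("mate", tools.cxSimulatedBinaryBounded,
                 eta=30.0, low=LOWER, up=UPPER)
toolbox.register("mutate", tools.mutPolynomialBounded,
                 eta=20.0, low=LOWER, up=UPPER, indpb=1.0 / N_PARAMS)
ref_points = tools.uniform_reference_points(nobj=2, p=12)
toolbox.register("select", tools.selNSGA3, ref_points=ref_points)

pop = toolbox.population(n=24)
for ind in pop:
    ind.fitness.values = toolbox.evaluate(ind)

for gen in range(10):
    # Clone, vary, and re-evaluate the offspring for this generation.
    offspring = [toolbox.clone(ind) for ind in pop]
    for c1, c2 in zip(offspring[::2], offspring[1::2]):
        toolbox.mate(c1, c2)
        toolbox.mutate(c1)
        toolbox.mutate(c2)
        del c1.fitness.values, c2.fitness.values
    for ind in offspring:
        if not ind.fitness.valid:
            ind.fitness.values = toolbox.evaluate(ind)
    # NSGA-III selection keeps a well-spread Pareto set of parameters.
    pop = toolbox.select(pop + offspring, k=len(pop))
```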

In order to make use of the U-NSGA-III algorithm, we've created a fork of the DEAP package that introduces this algorithm, which can be found at https://github.com/rcarson3/deap . After cloning the repo, a simple python setup.py install will install the deap package with our needed changes.

The optimization is driven by the ExaConstit_NSGA3.py script, which requires some editing to provide the bounds for the material model parameters, the location of the experimental observations, the location of the ExaConstit binary, and a number of other settings. One can modify the templated values of the options file by changing the values of the test_dataframe object in the script. The templated values in the master_options.toml file are marked by a %%template_name%% style pattern, so the user has a lot of flexibility in what they modify for the optimization, especially since these values can change per optimization objective. The material models being optimized can be changed through the ExaConstit_MatGen.py file.

While the script runs, a number of log files are generated that allow one to monitor the progress of the solves and note which generation and gene had the best solution. Additionally, each generation and gene has its own directory in which its runs are conducted, so one can go back and see how the optimization has progressed over time. Finally, since optimizations can take a while, a checkpoint file is generated every n iterations, which allows one to restart the optimization if, for instance, a job allocation runs out of time.
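
As an illustration of how the %%template_name%% placeholders can be filled in, the hedged snippet below substitutes values into a template fragment in the spirit of master_options.toml. The placeholder names and values shown here are made up; in the actual script the substitution is driven by the test_dataframe object.

```python
import re

# Hypothetical template fragment; the real master_options.toml contains many
# more options, and these placeholder names are invented for illustration.
template = """
temperature = %%temperature%%

[BCs]
essential_vals = [%%essential_vals%%]
"""

# In the real script these per-objective values come from test_dataframe.
values = {
    "temperature": "298.0",
    "essential_vals": "0.0, 0.0, 0.001",
}

# Replace every %%name%% placeholder with its value from the mapping.
options = re.sub(r"%%(\w+)%%", lambda m: values[m.group(1)], template)

with open("options.toml", "w") as fh:
    fh.write(options)
```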

Runtime selections

This workflow can currently make use of either serial or parallel execution of the MPI jobs. We accomplish this by defining a map_custom function in a *_map.py file, which allows us to swap out different backends with no changes to our main code. Currently, only two backends are available to users. The normal_map.py file provides serial execution of the MPI jobs, which might be fine for running the optimization scripts on a desktop. However, given the parallel nature of each generation, we can do better, so we make use of Flux ( https://flux-framework.org/ ) to parallelize our MPI jobs on a number of different systems through a single API. This Flux backend is available through the flux_map.py file, and its jobs can be tuned further through a template Flux job file called master_flux.py. Additional backends are welcome as long as they follow the same abstraction, as sketched below.
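
The sketch below gives a feel for what a serial backend in the spirit of normal_map.py might look like. The job-tuple layout, binary name, and command-line flag are assumptions made for illustration; the real backends define their own job description and launch commands.

```python
import subprocess

def map_custom(jobs):
    """Serial backend sketch: run each MPI job one after another.

    Each entry in ``jobs`` is assumed here to be a (workdir, n_ranks,
    options_file) tuple; the actual *_map.py backends define their own
    job description.
    """
    statuses = []
    for workdir, n_ranks, options_file in jobs:
        # Hypothetical launch command: the binary name and option flag are
        # placeholders, not necessarily what the workflow actually uses.
        cmd = ["mpirun", "-np", str(n_ranks),
               "./mechanics", "-opt", options_file]
        # Block until this simulation finishes before starting the next one.
        result = subprocess.run(cmd, cwd=workdir,
                                capture_output=True, text=True)
        statuses.append(result.returncode)
    return statuses
```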