alamj/awcmPostProcessing

Scale Adaptive Large Eddy Simulation (SALES)

The SALES framework has been implemented on top of libMesh+PETSc and OpenFOAM. The libMesh version is based on a wavelet-based higher-order discretization. The OpenFOAM version is based on a second-order finite-volume discretization that preserves the skew-symmetry of the Navier-Stokes equations.

SALES adopts three principles: i) implicit filtering of the Navier-Stokes equations should preserve the skew-symmetry of the nonlinear differential operator; ii) an explicit filter is used to construct the structural form of the subgrid-scale stress; and iii) an optimal eddy viscosity is sought that follows Kolmogorov's refined similarity hypothesis.

This document aims to help with post-processing the results of the OpenFOAM-based SALES methodology.

A) Parallel Reconstruction and Sampling

Reconstructing decomposed mesh data onto a single processor is not practical for a mesh with several hundred million grid points. The approach taken here is parallel reconstruction via MPI tools, which creates a *.h5 file containing the flow data and a *.xdmf file containing the metadata.

ParaView can load the .xdmf file. Using xtensor and MPI, the .h5 file can be further processed in parallel as needed. The .h5 file can also be processed with NumPy. This is an advantage over traditional reconstruction onto a single processor.
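As a minimal sketch of processing the .h5 output in Python (the dataset name "U" and the file layout are assumptions here; inspect your actual file with h5dump to find the real dataset names):

```python
import h5py
import numpy as np

# Create a small stand-in file so this sketch is self-contained;
# in practice you would open the CASE_DIR/statistics/xdmf/*.h5 output.
with h5py.File("fowOUT.h5", "w") as f:
    f.create_dataset("U", data=np.random.rand(8, 3))  # 8 cells, 3 components

# Read the flow data back into NumPy and compute a simple statistic.
with h5py.File("fowOUT.h5", "r") as f:
    U = f["U"][:]            # load the dataset as a NumPy array
mean_u = U.mean(axis=0)      # mean velocity vector over all cells
print(mean_u.shape)          # (3,)
```

The same pattern scales to parallel reads, since HDF5 supports chunked access from multiple MPI ranks.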

awcmviewer is a tool to read the entire flow field decomposed over many processors without reconstructing it onto a single processor. atmosphericLESpost is a tool that reads saved data from a decomposed case and generates new fields.

Generate new fields from an existing decomposed case

srun /project/def-alamj/shared/bin/v2306/atmosphericLESpost meshSetup -mode generate -time 350 -fields (Rij Gij Lij Lambda Prod) -filter no -parallel

This command will create Rij (SGS stress tensor), Gij (velocity gradient tensor), Lij (Leonard stress tensor), Lambda (lambda2 criterion), and Prod (production term).

"-filter no" tells the tool not to apply a filter when creating Gij. "Rij" will be saved as Rsgs, Ksgs, and Esgs. "Prod" will be saved as Prod (production term) and sPrd (shear production term).

Reconstruct 3D flow field as xdmf + h5 format

srun /project/def-alamj/shared/bin/v2306/awcmviewer -INP post_process.inp -analysis xdmf -fields 'U UPrime2Mean UMean Q Vort' -time 3600 -out fowOUT -pwd

The metadata will be saved as CASE_DIR/statistics/xdmf/fowOUT.xdmf and the flow data as CASE_DIR/statistics/xdmf/fowOUT.h5.

Extract data on a line joining pointA to pointB

srun /project/def-alamj/shared/bin/v2306/awcmviewer -INP post_process.inp -analysis line -fields 'U UMean' -time 36 -out fow -pwd -pointA '10 130 0' -pointB '10 130 960'

The output of the command will be saved in CASE_DIR/statistics/xdmf/fow.h5, which contains the points on the line and the requested fields.

Extract data on a plane and save as xdmf+h5 format

srun /project/def-alamj/shared/bin/v2306/awcmviewer -INP post_process.inp -analysis slice -out fowSLC.h5 -pointA '0 0.5 0.1245' -pointB '0 0 1' -time 350 -fields 'U UPrime2Mean Rsgs'

This command will read U, UPrime2Mean, and Rsgs from the 350 time folder and create a slice on the plane that contains pointA and has normal pointB. The result will be saved as CASE_DIR/statistics/xdmf/fowSLC.h5 and CASE_DIR/statistics/xdmf/fowSLC.xdmf.

Aggregate fields on planes normal to pointA

srun /project/def-alamj/shared/bin/v2306/awcmviewer -INP post_process.inp -analysis aggregate -out fowAVG.h5 -pointA '0 0 1' -fields 'U UPrime2Mean TKE' -hdf5 fowDATA.h5

This command will read CASE_DIR/statistics/xdmf/fowDATA.h5, average the listed fields to create mean vertical profiles, and save the result in CASE_DIR/statistics/xdmf/fowAVG.h5. Here, fowDATA.h5 was created by awcmviewer through the xdmf option.
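The plane-averaging idea behind the aggregate option can be sketched in NumPy (this is an illustration of the concept, not the tool's internals; the array layout is assumed):

```python
import numpy as np

# Synthetic velocity field on a structured grid, indexed (z, y, x, component),
# so planes normal to pointA = '0 0 1' are the z = const slabs.
nz, ny, nx = 16, 8, 8
U = np.random.rand(nz, ny, nx, 3)

# Average over the two horizontal directions of each z = const plane
# to obtain a mean vertical profile, one mean vector per level.
profile = U.mean(axis=(1, 2))
print(profile.shape)   # (16, 3)
```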

postABLpar

This utility converts OpenFOAM data into HDF5 format.

Run the following to see the list of fields that can be extracted: /project/def-alamj/shared/bin/v2306/postABLpar info

The following fields are available: C, U, UMean, Gij, Tij, Sij, Rij, Lij, vorticity, enstrophy, strain, skewness, stretching, P, pMean

C: cell centers
Gij: velocity gradient tensor
Sij: strain-rate tensor
Rij: subgrid-scale stress tensor
Lij: Leonard stress tensor
strain: S:S, the doubly contracted product of the strain-rate tensor (magnitude of strain)
skewness: trace(S^3), known as strain skewness
stretching: wSw, vortex stretching
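For a single cell, the scalar invariants listed above can be computed from the velocity-gradient tensor as follows (a sketch mirroring the definitions above, assuming Gij is the 3x3 tensor dU_i/dx_j):

```python
import numpy as np

G = np.random.rand(3, 3)              # velocity gradient dU_i/dx_j
S = 0.5 * (G + G.T)                   # strain-rate tensor Sij

# Vorticity vector w_i = eps_ijk dU_k/dx_j, built from the
# antisymmetric part of G.
w = np.array([G[2, 1] - G[1, 2],
              G[0, 2] - G[2, 0],
              G[1, 0] - G[0, 1]])

strain = np.sum(S * S)                # S:S, doubly contracted product
skewness = np.trace(S @ S @ S)        # trace(S^3), strain skewness
stretching = w @ S @ w                # wSw, vortex stretching
```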

Run the following to extract the desired fields: srun /project/def-alamj/shared/bin/v2306/postABLpar hdf5.txt -time 150 -parallel

hdf5.txt contains a list of options; the fields entry indicates the names of the fields to extract.

To get the Leonard stress, the velocity must be filtered according to constant/turbulenceProperties. Use filter yes; in hdf5.txt.

The result will be stored in statistics/hdf5/output_index.h5.

Run the following to get the list of fields in the .h5 file: h5dump -A statistics/hdf5/output_index.h5

Use h5py to read the data and generate statistics. This tool is intended for statistical analysis only; for visualization, further coding is needed to convert the data into XDMF format.

Run Python code

Using TigerVNC, open the standard compute environment.

On the same terminal, load Python: [ module load python/3.10 ]

On the same terminal, load NumPy etc.: [ module load scipy-stack ]

Using the same terminal, create a virtual environment: [ virtualenv --no-download ~/ENV ]

Activate the virtual environment: [ source ~/ENV/bin/activate ]

You can run a Python script as [ python your_script.py ]

Deactivate the environment with [ deactivate ]

AWCMViewer - Probes -> HDF5 Quickstart

Probes are fixed sensors placed in the computational domain. Each sensor collects a time series of selected fields at every time step over the entire run of the model. By the Lagrangian view of fluid flows and Taylor's frozen-turbulence hypothesis, each sensor is affected by eddies passing through multiple sensors. From a data-science viewpoint, it is crucial to project the sensed flow onto a low-dimensional vector space. The SALES framework has developed a SURE-WT algorithm for extracting coherent structures from the probe data. Some relevant tools are indicated below; this is a work in progress.
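Taylor's frozen-turbulence hypothesis mentioned above can be illustrated with a small NumPy example (a generic sketch, not part of the SALES tools): a probe's time series is mapped to a streamwise spatial coordinate via x = U_mean * t.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 101)            # probe sampling times [s]
u = 8.0 + np.sin(2 * np.pi * 0.5 * t)      # synthetic streamwise velocity [m/s]

# Under the frozen-turbulence hypothesis, eddies advect past the probe
# at the local mean speed, so time separations map to space separations.
U_mean = u.mean()                          # mean advection speed at the probe
x = U_mean * t                             # equivalent streamwise coordinate [m]
```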

An example: how to read probe data and convert it to HDF5 format.

Raw data is located in the case directory under postProcessing/probes/time/fields, where fields may be U, UPrime2Mean, UMean, etc. We want to save the time, probe coordinates, and time series of the fields within the case directory as statistics/xdmf/fowf15mwR0.h5. In the following command, post_probes.inp may be an empty file or the sample file (post_process.inp) available in the case directory.

/project/def-alamj/shared/bin/v2306/awcmviewer -INP post_probes.inp -analysis probes -fields 'U UPrime2Mean UMean' -time 0 -out fowf15mwR0 -pwd

/project/def-alamj/shared/bin/v2306/awcmviewer -INP post_probes.inp -analysis probes -fields 'U UPrime2Mean UMean' -time 0 -out fowf15mwR0 -pwd -pod 1

The last option [-pod value] applies a POD filter. If value = 0, only the strongest mode is retained. If value = 1, no POD filtering is applied. The default value is 1.
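The idea of keeping only the strongest POD mode (analogous to value = 0) can be sketched with snapshot POD via the SVD; this illustrates the concept only, not the tool's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 200))      # 64 probes x 200 time samples

# Snapshot POD: the left singular vectors are the spatial modes and the
# singular values rank their energy content.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Rank-1 reconstruction keeps only the strongest (most energetic) mode.
X1 = s[0] * np.outer(U[:, 0], Vt[0])
print(X1.shape)                         # same shape as X
```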

The HDF5 data can be processed with Python (h5py), MATLAB, or C++ xtensor. Some helper Python code is provided in awcmviewerutils.py. The idea is to provide the desired probe coordinate and extract the time series for plotting purposes.

Below is an example of how to read the time series of streamwise velocity (t, Ux) at probe (1680, 1320, 150); note the helper module must be imported and the file path passed to the helper functions:

import awcmviewerutils as av

out = "/scratch/alamj/WindFarms/fowf15mwR0/statistics/xdmf/fowf15mwR0.h5"

t, u = av.get_time_series(out, [1680, 1320, 150], 3, 0, "U")            # get Ux

t, u = av.get_time_series(out, [1680, 1320, 150], 6, 0, "UPrime2Mean")  # get xx component of Reynolds stress

t, u = av.get_time_series(out, [1680, 1320, 150], 6, 1, "UPrime2Mean")  # xy

t, u = av.get_time_series(out, [1680, 1320, 150], 6, 2, "UPrime2Mean")  # xz

t, u = av.get_time_series(out, [1680, 1320, 150], 6, 3, "UPrime2Mean")  # yy

Alternatively, use the following example:

loc = [1680, 1320, 150]

t = av.fetch_time_series(out, loc, 0, "time")

u = av.fetch_time_series(out, loc, 0, "U")

u = av.fetch_time_series(out, loc, 1, "UPrime2Mean")



Sampling on a plane

awcmviewer can extract the flow field on a 2D plane and save it as xdmf + h5. Note that XDMF/HDF5 is a highly efficient data structure for dealing with big-data problems using MPI tools. When the simulation spans several million grid points, it is better to submit batch jobs and then plot the HDF5 data, which can be more efficient than loading the entire data set into ParaView. The batch job can be submitted from compute nodes, and the plots made from VDI nodes.

srun /project/def-alamj/shared/bin/v2306/awcmviewer -INP post_process.inp -analysis slice -fields 'U UPrime2Mean UMean Q Vort' -time 3600 -out fowf15mwR0 -pwd -pointA '90 1320 150' -pointB '0 0 1' > slice.out

Here, pointA is a reference point and pointB is a unit normal vector; together they define the plane over which the flow is extracted.
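The point-plus-normal plane definition can be checked with a one-line formula: a point p lies on the plane exactly when (p - pointA) . n = 0. A minimal sketch with the values from the command above:

```python
import numpy as np

pointA = np.array([90.0, 1320.0, 150.0])   # reference point on the plane
n = np.array([0.0, 0.0, 1.0])              # unit normal (pointB)

p = np.array([10.0, 20.0, 150.0])          # a candidate grid point
dist = np.dot(p - pointA, n)               # signed distance to the plane
print(dist)                                # 0.0 -> p lies on the plane
```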

Reconstruct 3D flow field

Use the following command to save as xdmf + h5 format:

srun /project/def-alamj/shared/bin/v2306/awcmviewer -INP post_process.inp -analysis xdmf -fields 'U UPrime2Mean UMean Q Vort' -time 3600 -out fowf15mwR0 -pwd > xdmf.out

XDMF/HDF5 will create a text file containing the metadata and a .h5 file containing the actual data. Since these files are large, further post-processing can be done using, e.g., the C++ xtensor tool within an MPI framework.
