615 changes: 615 additions & 0 deletions notebooks/compare_correlations_algorithms.ipynb

Large diffs are not rendered by default.

37 changes: 19 additions & 18 deletions notebooks/compare_median_filters.ipynb
@@ -10,21 +10,20 @@
"\n",
"As described in Adrain & Westerweel, \"Particle image velocimetry\", 2011, J. Westerweel and coauthors created two versions of the median filter. The original one was created in 1994. Its main disadvantage is that it uses one (global) threshold for all the vectors in the velocity field. In 2005, it was proposed to normalize every vector in the velocity field before comparing it to the global threshold, thereby mitigating the disadvantage of the 1994 version of the median filter (see J. Westerweel, F. Scarano, \"Universal outlier detection for PIV data\", Experiments in fluids, 39(6), p.1096-1100, 2005).\n",
"\n",
"OpenPIV has implemented both version of the filter. The 1994 version (the original version) is given by the function `validation.local_median_val()`. The 2005 version (the normalized version) is given by the function `validation.local_norm_median_val()`.\n",
"OpenPIV has implemented both versions of the filter. The 1994 version (the original version) is given by the function `validation.local_median_val()`. The 2005 version (the normalized version) is given by the function `validation.local_norm_median_val()`.\n",
"\n",
"The phylosophy of their usage is the following. Both filters just check every vector in the velocity field and create a \"mask\" of the velocity field where those vector that didn't pass the threshold requirement are marked as NaNs. Then the OpenPIV function `filters.replace_outliers()` must be implemented. That function reads the \"mask\" and replace every NaN vector with the average of its neighbourhood.\n",
"The phylosophy of their usage is the following. Both filters just check every vector in the velocity field and create a \"mask\" of the velocity field where those vectors that didn't pass the threshold requirement are marked as NaNs. Then the OpenPIV function `filters.replace_outliers()` must be implemented. That function reads the \"mask\" and replace every NaN vector with the average of its neighbourhood.\n",
"\n",
"The purpose of this tutorial is compare the two fitlers for a rather difficult case of PIV of a two-phase cap_bubbly air-water flow"
"The purpose of this tutorial is to compare the two fitlers for a rather difficult case of PIV of a two-phase cap-bubbly air-water upward flow."
]
},
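To make the two flavours concrete, here is a minimal sketch of the mark-then-replace pattern on a synthetic field. The exact keyword names and return types of `validation.local_median_val()`, `validation.local_norm_median_val()` and `filters.replace_outliers()` differ between OpenPIV versions (newer versions return and accept a boolean flag array, older ones work with NaN-marked copies of the field), so treat the arguments below as assumptions to be checked against your installation.

```python
import numpy as np
from openpiv import validation, filters

# Synthetic velocity field with one obvious outlier.
u = np.ones((16, 16))
v = np.zeros((16, 16))
u[8, 8] = 25.0

# 1994 filter: one global threshold (same units as u and v) for every vector.
flags_1994 = validation.local_median_val(u, v, u_threshold=3.0, v_threshold=3.0, size=1)

# 2005 filter: residuals are normalized by the local median fluctuation,
# so a single dimensionless threshold (around 2) works across the whole field.
# The keyword name is an assumption.
flags_2005 = validation.local_norm_median_val(u, v, threshold=2.0)

# Replace the flagged vectors with the average of their neighbourhood.
u_clean, v_clean = filters.replace_outliers(u, v, flags_2005,
                                            method='localmean',
                                            max_iter=3, kernel_size=2)
```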
{
"cell_type": "code",
"execution_count": 143,
"execution_count": 7,
"id": "5b98dbef",
"metadata": {},
"outputs": [],
"source": [
"import pathlib\n",
"import cv2 # conda install opencv; alternatively use imread() from openpiv tools\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
@@ -36,12 +35,12 @@
"id": "a4a9c768",
"metadata": {},
"source": [
"Now let's write a function that does basic PIV processing on a pair of PIV frames. The function will find calculate correlations with `pyprocess.fft_correlate_images()`, then it will convert correlations to displacements with `pyprocess.correlation_to_displacement` and, finally, it will validate the velocity field and replace the outliers. It is very imporatant to understand what the right strategy is for the validation and replacement of the outliers. There are two types of validation: validation of the correlations and validation of the velocity field. Validation of the correlations makes sure that the largest correlation peak is much bigger than the rest of the peaks, thereby making sure that the largest correlation peak is not just noice but a valid signal. Once the correlations have been validated this way, one goes ahead and calculates the velocity field. Surprisingly, one can, still, obtain a velocity field with outliers (some vectors are much bigger or point in the direction different than the vectors surrounding it). I.e., correlations validation doesn't give a 100% warranty that the resultant velocity field will be absolutely accurate. So, one goes ahead and validates the velocity field with a median filter which compares every vector to several of its surrounding vectors. Both validation types just mark the invalid vectors. Then the replacement function goes through all the marked vectors and replaces every one of them with the average of its surrounding vectors. Now imagine if the majority of the vectors have been marked as invalid. In this case, the chance of getting invalid vectors in the surroundings of other invalid vectors is big. And we end up with completely \"remodeled\" velocity field where almost every vector has been replaced with the average of its surroundings. I.e., the resultant field is not physical. Obviously, if you choose to do both validations types (which, very well, can be the case), then you have invalid vectors due to correlations and due to outliers. Of course, some of them may be the same, but some of them may not. I.e., you increasing the number of invalid vectors for further replacement procedure. You want to avoid it. In order to avoid it, do validation-replacement in steps. First, validate correlations and replace them at once. Second, validate outliers and replace them at once."
"Now let's write a function that does basic PIV processing on a pair of PIV frames. The function will calculate correlations with `pyprocess.fft_correlate_images()`, then it will convert correlations to displacements with `pyprocess.correlation_to_displacement` and, finally, it will validate the velocity field and replace the outliers. It is very imporatant to understand what the right strategy is for the validation and replacement of the outliers. There are two types of validation: validation of the correlations and validation of the velocity field. Validation of the correlations makes sure that the largest correlation peak is much bigger than the rest of the peaks, thereby making sure that the largest correlation peak is not just noice but a valid signal. Once the correlations have been validated this way, one goes ahead and calculates the velocity field. Surprisingly, one can, still, obtain a velocity field with outliers (some vectors are much bigger or point in the direction different than the vectors surrounding it). I.e., correlations validation doesn't give a 100% warranty that the resultant velocity field will be absolutely accurate. So, one goes ahead and validates the velocity field with a median filter which compares every vector to several of its surrounding vectors. Both validation types just mark the invalid vectors. Then the replacement function goes through all the marked vectors and replaces every one of them with the average of its surrounding vectors. Now imagine if the majority of the vectors have been marked as invalid. In this case, the chance of getting invalid vectors in the surroundings of other invalid vectors is big. And we end up with completely \"remodeled\" velocity field where almost every vector has been replaced with the average of its surroundings. I.e., the resultant field is not physical. Obviously, if you choose to do both validations types (which, very well, can be the case), then you have invalid vectors due to correlations and due to outliers. Of course, some of them may be the same, but some of them may not. I.e., you increasing the number of invalid vectors for further replacement procedure. You want to avoid it. In order to avoid it, do validation-replacement in steps. First, validate correlations and replace them at once. Second, validate outliers and replace them at once."
]
},
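Before looking at the full function, here is a compact sketch of the staged strategy described above: validate the correlations first and replace the rejected vectors immediately, and only then run the median filter on the resulting field and replace again. The function names follow the ones quoted in this notebook; the signal-to-noise helpers, keyword names and threshold values are assumptions that may need adjusting for your OpenPIV version.

```python
from openpiv import pyprocess, validation, filters

def validate_and_replace(corrs, n_rows, n_cols):
    """Two-stage outlier handling: correlation validation first, field validation second."""
    # Correlation maps -> displacement field on an (n_rows, n_cols) grid.
    u, v = pyprocess.correlation_to_displacement(corrs, n_rows, n_cols)

    # Stage 1: signal-to-noise validation of the correlations, then immediate replacement.
    s2n = pyprocess.sig2noise_ratio(corrs, sig2noise_method='peak2peak', width=2)
    flags_s2n = validation.sig2noise_val(s2n, threshold=2.0).reshape(n_rows, n_cols)
    u, v = filters.replace_outliers(u, v, flags_s2n, method='localmean', max_iter=3)

    # Stage 2: normalized median validation of the velocity field itself,
    # followed by a second, separate replacement pass.
    flags_med = validation.local_norm_median_val(u, v, threshold=2.5)
    u, v = filters.replace_outliers(u, v, flags_med, method='localmean', max_iter=3)
    return u, v
```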
{
"cell_type": "code",
"execution_count": 144,
"execution_count": 8,
"id": "8b93f5bb",
"metadata": {},
"outputs": [],
@@ -91,6 +90,8 @@
" # Find the correlations map.\n",
" # \"linear\" correlation_method together with normalized_correlation=True\n",
" # helps to boost the s2n threshold from 1.003 to 2 for 95% valid vectors.\n",
" # See OpenPIV Jupiter notebook compare_correlations_algorithms.ipynb for \n",
" # why correlation_method is chosen to be 'linear'.\n",
" corrs = pyprocess.fft_correlate_images(\n",
" pyprocess.moving_window_array(dataArray[0],searchAreaSize,overlap),\n",
" pyprocess.moving_window_array(dataArray[1],searchAreaSize,overlap),\n",
@@ -246,7 +247,7 @@
" # MASK OUT VELOCITY FIELD.\n",
" # Before saving the field to a .txt file, give zeros to those vectors that lie in the masked regions.\n",
" # Right now, x and y are in the image system of coordinates: x is to the right, y is downwards, (0,0)\n",
" # is in the top left corner. It can be learnt that from the GitHub code of tools.transform_coordinates.\n",
" # is in the top left corner. It can be learnt from the GitHub code of tools.transform_coordinates.\n",
" # Since our x and y are in mm, we're going to use scaling factor to convert them to pix. Then we're going\n",
" # to use them to identify whether or not their place on an example masked image is masked.\n",
" xFlat = x.flatten()\n",
@@ -280,7 +281,7 @@
},
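A short, self-contained sketch of the mask-out step described in the comments above: the grid coordinates (in mm, image coordinate system) are scaled back to pixels and looked up in a binary mask image, and any vector lying in a masked region is set to zero. The names `maskImage` and `scalingFactor` are illustrative assumptions, not variables copied from the hidden part of the cell.

```python
import numpy as np

def zero_masked_vectors(x, y, u, v, maskImage, scalingFactor):
    """Zero every vector whose (x, y) position falls inside a masked region.

    x, y are in mm in the image coordinate system (x to the right, y downwards,
    origin in the top-left corner). scalingFactor is assumed to be in pixels per mm;
    maskImage is a 2D array that is nonzero inside the masked regions.
    """
    xPix = np.clip(np.rint(x * scalingFactor).astype(int), 0, maskImage.shape[1] - 1)
    yPix = np.clip(np.rint(y * scalingFactor).astype(int), 0, maskImage.shape[0] - 1)
    masked = maskImage[yPix, xPix] > 0   # row index = y, column index = x
    u = np.where(masked, 0.0, u)
    v = np.where(masked, 0.0, v)
    return u, v
```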
{
"cell_type": "code",
"execution_count": 145,
"execution_count": 9,
"id": "f8fbe300",
"metadata": {},
"outputs": [],
@@ -307,7 +308,7 @@
},
{
"cell_type": "code",
"execution_count": 146,
"execution_count": 10,
"id": "a2cc4303",
"metadata": {},
"outputs": [
@@ -357,15 +358,15 @@
},
{
"cell_type": "code",
"execution_count": 147,
"execution_count": 11,
"id": "93f1318e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Using normalized 2005 Westerweel's median filter:\n",
"Using normalized 2005 Westerweel's median filter with one run:\n",
"\n",
"\n",
"Percentage of invalid vectors:\n",
@@ -407,7 +408,7 @@
},
{
"cell_type": "code",
"execution_count": 148,
"execution_count": 12,
"id": "127e6c2f",
"metadata": {},
"outputs": [
@@ -460,7 +461,7 @@
"id": "83d36da1",
"metadata": {},
"source": [
"See how the fields are becoming visually better starting from the classic median filter to the normalized median filter and to the double-ran normalized median filter. If one has several velocity fields in a row (i.e., the time series of velocity fields) and wants to average them (along the time axis), invalid vectors might alter the average and one might need a bigger time series to get a more stable average. On the other hand when all the velocity fields have all the valid vectors, the average becomes more stable (doesn't need as big of a time series). Also, when one analyzes vorted dynamics (i.e., calculates different derivatives within a single velocity field), one should better have all the velocity vectors valid."
"See how the fields are becoming visually better starting from the classic median filter to the normalized median filter and to the double-ran normalized median filter. If one has several velocity fields in a row (i.e., the time series of velocity fields) and wants to average them (along the time axis), outliers might alter the average and one might need a bigger time series to get a more stable average. On the other hand when all the velocity fields don't have outliers, the average becomes more stable (doesn't need as big of a time series). Also, when one analyzes vortex dynamics (i.e., calculates different derivatives within a single velocity field), one should better have no outliers. This is why it is important to filter out the outliers and replace them with something better, like the average of the surrounding vectors."
]
},
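The effect of outliers on a time average is easy to demonstrate with made-up numbers: a single spurious vector can bias the mean of a short time series far more than the turbulent fluctuations themselves, which is why a series of cleaned fields converges with fewer snapshots.

```python
import numpy as np

rng = np.random.default_rng(0)
# Ten snapshots of one velocity component at a single grid point; true mean is 1.0.
series = rng.normal(loc=1.0, scale=0.05, size=10)
print(f"clean mean:       {series.mean():.3f}")

# The same series with one outlier (e.g. a vector coming from a noise peak).
series_with_outlier = series.copy()
series_with_outlier[3] = 15.0
print(f"with one outlier: {series_with_outlier.mean():.3f}")  # biased by roughly +1.4
```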
{
@@ -469,13 +470,13 @@
"metadata": {},
"source": [
"### Final note.\n",
"The parameters used in the functin were carefully selected using trials and errors method. The selection of the right parameters can vary greatly: from a couple of hours to a day.\n",
"The parameters used in the function `PIVanalysis()` were carefully selected using trials and errors method. The time it takes to find the right parameters can vary greatly: from a couple of hours to a day.\n",
"\n",
"The filters have many parameters, changing some of them slightly can lead to drastic change in velocity fields. Therefore, one should spend time playing with the parameters in order got get the necessary experience. Go ahead and change the threshold parameter in either one of the normalized median filters runs from the current 2.5 to, say, 1.1 and witness with your own eyes how wrong the resultant velocity field becomes. This effect is explained at the beginning of this notebook.\n",
"\n",
"Then go ahead and play with the rest of the parameters of all the filters as well as with parameters of the outliers replacing functions.\n",
"Then go ahead and play with the rest of the parameters of all the filters as well as with the parameters of the outliers replacing functions. Also, try reordering the validation methods: try doing signal to noise validatin after the median filter and see how that affects the resultant velocity field.\n",
"\n",
"This notebook examplifies the very basic PIV. Of course, for flows like this one more advanced PIV techniques are preferable, such as OpenPIV's windef.py.\n"
"This notebook examplifies the very basic PIV. Of course, for flows like this one more advanced PIV techniques are preferable, such as OpenPIV's `windef.py` which also has normalized median filter implemented.\n"
]
}
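As one such experiment, the threshold sensitivity mentioned in the final note can be checked directly by counting how many vectors each threshold flags before any replacement is done. A sketch follows, with `u` and `v` taken from the run above and the same assumed keyword names as before.

```python
import numpy as np
from openpiv import validation

# u, v: velocity components produced by PIVanalysis() above (assumed to be in scope).
for threshold in (2.5, 1.1):
    flags = validation.local_norm_median_val(u, v, threshold=threshold)
    pct = 100.0 * np.count_nonzero(flags) / flags.size
    print(f"threshold = {threshold}: {pct:.1f}% of vectors flagged")
# With a threshold as low as 1.1, a large fraction of the field gets flagged and the
# subsequent replacement effectively "remodels" the velocity field.
```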
],
@@ -495,7 +496,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.4"
"version": "3.12.5"
},
"papermill": {
"default_parameters": {},