@@ -274,7 +274,7 @@ mpiexec -n 4 ./main3d.gnu.MPI.ex inputs
#### Inputs File

The following parameters can be set at run-time -- these are currently set in the inputs
- file but you can also set them on the command line.
+ file but you can also set them on the command line.

```
stop_time = 2.0 # the final time (if we have not exceeded number of steps)
@@ -292,7 +292,7 @@ adv.phierr = 1.01 1.1 1.5 # regridding criteria at each level
```

- This inputs file specifies a base grid of 64 x 64 x 8 cells, made up of 16 subgrids each with 16x16x8 cells.
+ This inputs file specifies a base grid of 64 x 64 x 8 cells, made up of 16 subgrids each with 16x16x8 cells.
This is also where we tell the code to refine based on the magnitude of $$ \phi $$. We set the
threshold level by level. If $$ \phi > 1.01 $$ then we want to refine at least once; if $$ \phi > 1.1 $$ we
want to resolve $$ \phi $$ with two levels of refinement, and if $$ \phi > 1.5 $$ we want even more refinement.
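For orientation, here is a minimal sketch of the grid-related lines such an inputs file would contain. The `adv.phierr` values come from the hunk header above; `amr.n_cell` and `amr.max_grid_size` are the standard AMReX grid parameters assumed to produce the layout just described (they are not visible in this diff, so treat the exact lines as an assumption):

```
# Assumed grid parameters (not shown in the hunks above):
amr.n_cell        = 64 64 8    # base (level 0) grid of 64 x 64 x 8 cells
amr.max_grid_size = 16         # cap each box at 16 cells per direction,
                               # giving 4 x 4 x 1 = 16 boxes of 16 x 16 x 8

# From the hunk header above: per-level regridding thresholds on phi
adv.phierr = 1.01 1.1 1.5      # refine wherever phi exceeds these values
```

Raising `amr.max_grid_size` to 32, for example, would instead produce 4 boxes of 32 x 32 x 8, which changes how much work each MPI rank receives.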
@@ -389,6 +389,11 @@ Try the following:
results. Also try using the ` inputs ` input file and ` inputs_for_scaling ` input
file.

+ - Experiment with different options in the inputs file - what happens when you change "adv.do_subcycle"?
+ What about "adv.do_reflux" or "adv.phierr"? (See the command-line sketch below.)
+
+
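As noted at the top of this section, any of these parameters can also be set on the command line, overriding the value in the inputs file. Below is a sketch of how that experiment might be launched; the parameter names come from the items above, while the 0/1 values and the quoting of the multi-value override are assumptions to adapt as needed:

```
# Command-line settings override the ones in the inputs file (no editing needed):
mpiexec -n 4 ./main3d.gnu.MPI.ex inputs adv.do_subcycle=0 adv.do_reflux=0

# Tighter per-level regridding thresholds for the same run:
mpiexec -n 4 ./main3d.gnu.MPI.ex inputs adv.phierr="1.01 1.05 1.1"
```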
+ {% comment %}
<br >
### Key Observations:
@@ -456,6 +461,9 @@ Try the following:
not enough work for each rank. In this case, it's likely the former.
</details >
+
+ {% endcomment %}
+
<br >
<br >
<br >
@@ -526,20 +534,7 @@ The same code that runs on the HPC you can debug on your laptop.
### Activity

- - Try running the GPU enabled version and compare runtimes.
-
- ### Key Observations
-
- - Running on GPUs did not require changes to the code.
-
- - Running on GPUs was fast.
-
- <details >
- Running Amr101 with 1 MPI process and 1 GPU took 0.283s.
- </details >
-
-
-
+ - Try running the GPU-enabled version. How does the runtime compare to the CPU version?
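One possible way to build and launch the GPU run, assuming AMReX's GNU Make system with its `USE_CUDA=TRUE` option, is sketched below; the executable suffix depends on the build settings, so adjust it to whatever your build actually produces:

```
# Build a CUDA-enabled executable (the name/suffix varies with the make options):
make -j 8 USE_MPI=TRUE USE_CUDA=TRUE

# Launch with one MPI rank per GPU and compare the timing against the CPU-only run:
mpiexec -n 1 ./main3d.gnu.MPI.CUDA.ex inputs
```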
{% comment %}
<!-- subcycling