
Merge pull request #398 from ax3l/doc-cleanupParallelBenchmark
Parallel Benchmark: Cleanup
ax3l authored Dec 12, 2018
2 parents 14c7a01 + cc514f5 commit 8980b71
Showing 10 changed files with 39 additions and 45 deletions.
22 changes: 20 additions & 2 deletions CHANGELOG.rst
@@ -11,7 +11,7 @@ Changelog

[Summary]

Changes to "0.6.2-alpha"
Changes to "0.6.3-alpha"
^^^^^^^^^^^^^^^^^^^^^^^^

Features
@@ -27,11 +27,12 @@ Features
- works with Python 3.7 #376
- setup.py for sdist #240
- Backends: JSON support added #384 #393 #338
+- Parallel benchmark added #346 #398

Bug Fixes
"""""""""

-- support reading series with varying or no iteration padding in filename #388
+- spurious MPI C++11 API usage in ParallelIOTest removed #396

Other
"""""
@@ -44,6 +45,23 @@ Other
- Catch2: separate implementation and tests #399 #400


+0.6.3-alpha
+-----------
+**Date:** 2018-11-12
+
+Reading Varying Iteration Padding Reading
+
+Support reading series with varying iteration padding (or no padding at all) as currently used in PIConGPU.
+
+Changes to "0.6.2-alpha"
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Bug Fixes
+"""""""""
+
+- support reading series with varying or no iteration padding in filename #388
+

0.6.2-alpha
-----------
**Date:** 2018-09-25
2 changes: 1 addition & 1 deletion CMakeLists.txt
@@ -580,7 +580,7 @@ set(openPMD_EXAMPLE_NAMES
5_write_parallel
6_dump_filebased_series
7_extended_write_serial
-8_mpi_benchmark
+8_benchmark_parallel
)
set(openPMD_PYTHON_EXAMPLE_NAMES
2_read_serial
1 change: 1 addition & 0 deletions docs/source/utilities/8_benchmark_parallel.cpp

1 change: 0 additions & 1 deletion docs/source/utilities/8_mpi_benchmark.cpp

This file was deleted.

2 changes: 1 addition & 1 deletion docs/source/utilities/benchmark.rst
@@ -50,4 +50,4 @@ root rank will be populated with data, i.e. all ranks' data will be collected in

Example usage

-.. literalinclude:: 8_mpi_benchmark.cpp
+.. literalinclude:: 8_benchmark_parallel.cpp
19 changes: 6 additions & 13 deletions examples/8_mpi_benchmark.cpp → examples/8_benchmark_parallel.cpp
@@ -1,11 +1,10 @@
#include <openPMD/openPMD.hpp>
-#include <openPMD/Series.hpp>
#include <openPMD/benchmark/mpi/MPIBenchmark.hpp>
#include <openPMD/benchmark/mpi/RandomDatasetFiller.hpp>
#include <openPMD/benchmark/mpi/OneDimensionalBlockSlicer.hpp>

#if openPMD_HAVE_MPI
-#include <mpi.h>
+#   include <mpi.h>
#endif

#include <iostream>
@@ -100,25 +99,19 @@ int main(
// Take notice that results will be collected into the root rank's report object, the other
// ranks' reports will be empty. The root rank is specified by the first parameter of runBenchmark,
// the default being 0.
-auto
-res =
+auto res =
benchmark.runBenchmark<std::chrono::high_resolution_clock>();

int rank;
MPI_Comm_rank(
MPI_COMM_WORLD,
&rank
);
-if (rank == 0)
+if( rank == 0 )
{
-for (auto
-it =
-res.durations
-.begin();
-it !=
-res.durations
-.end();
-it++)
+for( auto it = res.durations.begin();
+it != res.durations.end();
+it++ )
{
auto time = it->second;
std::cout << "on rank " << std::get<res.RANK>(it->first)
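The tidied example boils down to collecting the per-rank timings on the root rank and printing them. A rough standalone sketch of that reporting pattern follows; the map layout and names below are simplified stand-ins for illustration, not the actual report type from the openPMD benchmark headers.

```cpp
// Standalone sketch of the per-rank reporting loop used by the example.
// The map below is a simplified stand-in for the benchmark's report object;
// only the loop structure mirrors the example above.
#include <chrono>
#include <cstddef>
#include <iostream>
#include <map>
#include <string>
#include <tuple>

int main()
{
    using Duration = std::chrono::milliseconds;
    // key: (rank, compression, backend) -> measured duration (dummy values)
    std::map< std::tuple< int, std::string, std::string >, Duration > durations;
    durations[ std::make_tuple( 0, "none", "ADIOS1" ) ] = Duration{ 120 };
    durations[ std::make_tuple( 1, "none", "ADIOS1" ) ] = Duration{ 131 };

    constexpr std::size_t RANK = 0; // position of the rank in the key tuple
    for( auto it = durations.begin(); it != durations.end(); it++ )
    {
        auto time = it->second;
        std::cout << "on rank " << std::get< RANK >( it->first )
                  << ": " << time.count() << "ms" << std::endl;
    }
    return 0;
}
```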
9 changes: 2 additions & 7 deletions include/openPMD/benchmark/mpi/BlockSlicer.hpp
@@ -21,10 +21,6 @@

#pragma once

-#if openPMD_HAVE_MPI
-
-
-#include <mpi.h>
#include "openPMD/Dataset.hpp"


@@ -41,7 +37,7 @@ namespace openPMD
* Associate the current thread with its cuboid.
* @param totalExtent The total extent of the cuboid.
* @param size The number of threads to be used (not greater than MPI size).
-* @param comm MPI communicator.
+* @param rank The MPI rank.
* @return A pair of the cuboid's offset and extent.
*/
virtual std::pair<
@@ -50,8 +46,7 @@
> sliceBlock(
Extent & totalExtent,
int size,
-MPI_Comm comm
+int rank
) = 0;
};
}
-#endif
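Since the interface now takes a plain rank instead of a communicator, a custom slicer can be written without any MPI dependency. A minimal sketch of a derived slicer against the new signature is shown below; it assumes sliceBlock returns a std::pair< Offset, Extent >, as the one-dimensional implementation further down suggests, and is illustrative only, not part of the repository.

```cpp
// Illustrative slicer: rank 0 gets the full extent, every other rank an empty
// block. Only meant to demonstrate the new rank-based sliceBlock() signature.
#include "openPMD/Dataset.hpp"
#include "openPMD/benchmark/mpi/BlockSlicer.hpp"
#include <utility>

class RootOnlyBlockSlicer : public openPMD::BlockSlicer
{
public:
    std::pair<
        openPMD::Offset,
        openPMD::Extent
    > sliceBlock(
        openPMD::Extent & totalExtent,
        int size,
        int rank
    ) override
    {
        ( void )size; // this toy slicer ignores how many ranks participate
        openPMD::Offset offset( totalExtent.size(), 0 );
        openPMD::Extent extent = rank == 0
            ? totalExtent
            : openPMD::Extent( totalExtent.size(), 0 );
        return std::make_pair( std::move( offset ), std::move( extent ) );
    }
};
```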
7 changes: 6 additions & 1 deletion include/openPMD/benchmark/mpi/MPIBenchmark.hpp
@@ -292,14 +292,19 @@ namespace openPMD
this->communicator,
&actualSize
);
+int rank;
+MPI_Comm_rank(
+this->communicator,
+&rank
+);
size = std::min(
size,
actualSize
);
return m_blockSlicer->sliceBlock(
totalExtent,
size,
-communicator
+rank
);
}

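Read end to end, the adjusted helper in MPIBenchmark now resolves the rank itself and hands the slicer plain integers. A paraphrase of the hunk above (not a verbatim excerpt, and again assuming the std::pair< Offset, Extent > return type):

```cpp
// Paraphrased caller-side pattern: clamp the requested number of threads to
// the communicator size, look up this process' rank, and pass both as plain
// ints so BlockSlicer implementations stay MPI-free.
#include "openPMD/Dataset.hpp"
#include "openPMD/benchmark/mpi/BlockSlicer.hpp"
#include <mpi.h>
#include <algorithm>
#include <utility>

std::pair< openPMD::Offset, openPMD::Extent > slice(
    openPMD::BlockSlicer & blockSlicer,
    openPMD::Extent & totalExtent,
    int size,
    MPI_Comm communicator )
{
    int actualSize;
    MPI_Comm_size( communicator, &actualSize );
    int rank;
    MPI_Comm_rank( communicator, &rank );
    size = std::min( size, actualSize );
    return blockSlicer.sliceBlock( totalExtent, size, rank );
}
```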
7 changes: 1 addition & 6 deletions include/openPMD/benchmark/mpi/OneDimensionalBlockSlicer.hpp
@@ -21,12 +21,8 @@

#pragma once

-#if openPMD_HAVE_MPI
-
-
#include "openPMD/Dataset.hpp"
#include "openPMD/benchmark/mpi/BlockSlicer.hpp"
-#include <mpi.h>


namespace openPMD
@@ -45,8 +41,7 @@
> sliceBlock(
Extent & totalExtent,
int size,
-MPI_Comm comm
+int rank
) override;
};
}
-#endif
14 changes: 1 addition & 13 deletions src/benchmark/mpi/OneDimensionalBlockSlicer.cpp
@@ -26,9 +26,6 @@

namespace openPMD
{
-#if openPMD_HAVE_MPI
-
-
OneDimensionalBlockSlicer::OneDimensionalBlockSlicer( Extent::value_type dim ) :
m_dim { dim }
{}
@@ -40,15 +37,9 @@ namespace openPMD
> OneDimensionalBlockSlicer::sliceBlock(
Extent & totalExtent,
int size,
-MPI_Comm comm
+int rank
)
{
-int rank;
-MPI_Comm_rank(
-comm,
-&rank
-);
-
Offset offs(
totalExtent.size( ),
0
@@ -102,7 +93,4 @@ namespace openPMD
std::move( localExtent )
);
}
-
-
-#endif
}
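The slicing arithmetic itself is unchanged by this commit and sits in the elided part of sliceBlock: the chosen dimension is cut into near-equal chunks, one per rank. One common way to compute such a split, sketched independently of the library (the exact remainder distribution openPMD uses may differ), is:

```cpp
// Toy 1D split of `total` elements across `size` ranks: the first
// `total % size` ranks receive one extra element each.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <utility>

std::pair< std::uint64_t, std::uint64_t > // (offset, count) for one rank
splitOneDimension( std::uint64_t total, int size, int rank )
{
    std::uint64_t base = total / size;
    std::uint64_t remainder = total % size;
    std::uint64_t r = static_cast< std::uint64_t >( rank );
    std::uint64_t count = base + ( r < remainder ? 1u : 0u );
    std::uint64_t offset = r * base + std::min( r, remainder );
    return std::make_pair( offset, count );
}

int main()
{
    // e.g. 10 elements over 4 ranks -> counts 3, 3, 2, 2 with contiguous offsets
    for( int rank = 0; rank < 4; ++rank )
    {
        auto block = splitOneDimension( 10, 4, rank );
        std::cout << "rank " << rank << ": offset " << block.first
                  << ", count " << block.second << std::endl;
    }
    return 0;
}
```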
