BatchedInverseProblem with asynchronous, device-aware forward_map + concatenation #232
We need a way to run batches of forward simulations asynchronously, and then to concatenate the forward maps deterministically for EnsembleKalmanInversion. One way to do this is to develop a BatchedInverseProblem that consists of InverseProblems, each with their own observations and simulation; the forward_map / inverting_forward_map from each individual InverseProblem is then concatenated and passed to EnsembleKalmanInversion.
Run inverting_forward_map asynchronously: https://docs.julialang.org/en/v1/manual/asynchronous-programming/

We probably also want to make inverting_forward_map "device aware", so that we can run simulations on different GPUs on the same node (for example). This won't be hard, since it's just a matter of "switching" to the appropriate device before running any GPU code. We can copy data to the CPU in FieldTimeSeriesCollector while the simulations are running, so none of the rest of the code needs to care about this. See the CUDA.jl docs or https://juliagpu.org/post/2020-07-18-cuda_1.3/index.html.

All of this is relatively simple to implement, in that it won't take many lines of code once we know what to write.
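A sketch of how the async and device-aware pieces might fit together, using Julia's task-based asynchrony. `switch_device!` and `batched_forward_map` are hypothetical stand-ins so the example runs without a GPU; with CUDA.jl the switch would be CUDA.device!(dev). Results land in a vector indexed by problem number, so the concatenated forward map is deterministic regardless of which task finishes first.

```julia
# Stand-in for CUDA.device!: in real GPU code we would switch to the
# appropriate device here before launching any kernels.
const current_device = Ref(0)
switch_device!(dev::Int) = (current_device[] = dev)

function batched_forward_map(simulations, devices, θ)
    results = Vector{Vector{Float64}}(undef, length(simulations))
    @sync for (i, (sim, dev)) in enumerate(zip(simulations, devices))
        @async begin
            switch_device!(dev)   # select the device before any GPU work
            sleep(0.01 * rand())  # simulations finish in arbitrary order
            results[i] = sim(θ)   # copying data back to the CPU would
                                  # happen here, e.g. inside a collector
                                  # like FieldTimeSeriesCollector
        end
    end
    return vcat(results...)       # deterministic, ordered concatenation
end

simulations = [θ -> [θ], θ -> [2θ], θ -> [3θ]]
G = batched_forward_map(simulations, [0, 1, 0], 3.0)
```

Note that `@async` tasks all run on one thread; for true parallelism across GPUs one would use `Threads.@spawn` or similar, but the device-switching and deterministic-collection pattern is the same.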