failing example_scripts/run_simulation.py last step #445
Hi @Takadonet, can you tell me which world file you are using so I can try to recreate it? Is it the default created by
Yes, with the default world created with
I see. When running in parallel, the "world" is divided into subdomains, and each CPU simulates one of these domains. The summaries are stored per subdomain and combined at the end. What you are seeing is an edge case: a subdomain records no deaths because it is too small or not enough time has passed, so its summary file is empty, and combining the summaries then fails, as you've noticed. As a temporary workaround, you could try running with fewer CPUs. With a larger world, or perhaps a higher infection rate, this should not happen. I just tested it with 6 CPUs and it runs fine. How many CPUs are you using?
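A hypothetical illustration of the fix described above (this is not the JUNE code itself, and the inlined CSV strings merely stand in for the per-rank summary.##.csv files): skip any subdomain summary that contains only a header row before concatenating and aggregating, so the `groupby` never sees an all-empty input.

```python
import io

import pandas as pd

# Stand-ins for two per-rank summary files: one with data, one header-only
# (the empty-subdomain edge case described in this issue).
full = io.StringIO("region,time_stamp,deaths\nLondon,2020-03-01,3\n")
header_only = io.StringIO("region,time_stamp,deaths\n")

frames = [pd.read_csv(f) for f in (full, header_only)]

# Drop zero-row summaries so the aggregation only sees real, typed rows.
non_empty = [df for df in frames if len(df) > 0]

combined = pd.concat(non_empty, ignore_index=True)
# Same grouping as in combine_summaries, with a hypothetical aggregator dict.
summary = combined.groupby(["region", "time_stamp"], as_index=False).agg(
    {"deaths": "mean"}
)
print(summary)
```

Without the `non_empty` filter, the header-only frame contributes all-object columns to the concatenated DataFrame, which is what trips up the numeric `mean` aggregation.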
42 :)
Yes, please go ahead!
Thanks for merging my changes and removing unused dependencies!
When running run_simulation.py single-threaded or with mpirun, I get a failure when it appears to aggregate the results.
```
Traceback (most recent call last):
  File "/locationJUNE/example_scripts/run_simulation.py", line 180, in
    combine_records(save_path)
  File "/location/lib/python3.9/site-packages/june-1.1.2-py3.9.egg/june/records/records_writer.py", line 386, in combine_records
    combine_summaries(
  File "/location/lib/python3.9/site-packages/june-1.1.2-py3.9.egg/june/records/records_writer.py", line 337, in combine_summaries
    df = df.groupby(["region", "time_stamp"], as_index=False).agg(aggregator)
  File "/location/lib/python3.9/site-packages/pandas/core/groupby/generic.py", line 945, in aggregate
    result, how = aggregate(self, func, *args, **kwargs)
  File "/location/lib/python3.9/site-packages/pandas/core/aggregation.py", line 582, in aggregate
    return agg_dict_like(obj, arg, _axis), True
  File "/location/lib/python3.9/site-packages/pandas/core/aggregation.py", line 768, in agg_dict_like
    results = {key: obj._gotitem(key, ndim=1).agg(how) for key, how in arg.items()}
  File "/location/lib/python3.9/site-packages/pandas/core/aggregation.py", line 768, in
    results = {key: obj._gotitem(key, ndim=1).agg(how) for key, how in arg.items()}
  File "/location/lib/python3.9/site-packages/pandas/core/groupby/generic.py", line 253, in aggregate
    return getattr(self, cyfunc)()
  File "/location/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1496, in mean
    return self._cython_agg_general(
  File "/location/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1081, in _cython_agg_general
    raise DataError("No numeric types to aggregate")
pandas.core.base.DataError: No numeric types to aggregate
```
I see multiple summary.##.csv files produced (when running with mpirun), and a few contain only a header with no row data. I believe that is why it fails. Is this an edge case that wasn't taken into account, or did I mess something else up when running the test script?
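A minimal, standalone illustration of the mechanism (not the JUNE code): reading a header-only CSV yields a zero-row DataFrame whose columns all have `object` dtype, since pandas has no rows from which to infer numeric types. A numeric aggregator such as `mean` then finds no numeric columns, which, in the pandas version shown in the traceback, surfaces as `DataError: No numeric types to aggregate`.

```python
import io

import pandas as pd

# A header-only summary file, like the empty summary.##.csv files observed.
header_only = io.StringIO("region,time_stamp,deaths\n")
df = pd.read_csv(header_only)

print(len(df))             # zero rows
print(df.dtypes.tolist())  # every column is object dtype, even "deaths"
```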