Closed
In process_mth5.py there is a note that says to do this review.
Basically, the following snippet of code is executed after the pipeline, to create the mt_metadata TF object:
tf_cls = export_tf(
    tf_collection,
    station_metadata_dict=station_metadata.to_dict(),
    survey_dict=survey_dict,
)
The tf_collection is an aurora data structure that tracks the TF values per decimation level. These TFs can be built from many runs, so we need to make sure that the TF knows which runs were used to generate it.
In the current code, we are doing this:
local_run_obj = dataset_df["run"].iloc[0]
station_metadata = local_run_obj.station_group.metadata
station_metadata._runs = []
run_metadata = local_run_obj.metadata
station_metadata.add_run(run_metadata)
What this means is that only the first run's metadata is being scraped here. But it looks like there is already a facility for adding the metadata from the other runs.
Here is a comment from the code in this area:
# There is a container that can handle storage of multiple runs in xml, Anna made something like this.
# N.B. Currently, only the last run makes it into the tf object,
# but we can simply iterate of the run list here, getting run metadata
# station_metadata.add_run(run_metadata)
So I will try implementing this iterator.
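A minimal sketch of that iterator, with stand-in classes that mimic only the attributes used in the snippets above (station_group.metadata, .metadata, add_run); these mocks are hypothetical and are not the real mth5/mt_metadata objects:

```python
import pandas as pd

# Stand-in classes for illustration only; the real objects come from mth5.
class StationMetadata:
    def __init__(self):
        self._runs = []

    def add_run(self, run_metadata):
        self._runs.append(run_metadata)

class StationGroup:
    def __init__(self, metadata):
        self.metadata = metadata

class RunGroup:
    def __init__(self, run_id, station_group):
        self.metadata = {"id": run_id}  # real code uses mt_metadata Run objects
        self.station_group = station_group

# A dataset_df with one row per run, as in the issue.
station = StationGroup(StationMetadata())
dataset_df = pd.DataFrame(
    {"run": [RunGroup("001", station), RunGroup("002", station)]}
)

# Current behavior scrapes only the first run; the proposed fix is to
# clear the run list once and then iterate over every run in dataset_df.
station_metadata = dataset_df["run"].iloc[0].station_group.metadata
station_metadata._runs = []
for run_obj in dataset_df["run"]:
    station_metadata.add_run(run_obj.metadata)

print(len(station_metadata._runs))  # 2: both runs' metadata attached
```

With this loop in place, station_metadata carries metadata for every run that contributed to the TF, rather than only the first.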