
Commit 8c231e5

Made reading filters more explicit and added documentation
1 parent 1b541d4 commit 8c231e5

2 files changed: +92 −32 lines changed

README.md

Lines changed: 44 additions & 0 deletions
@@ -35,6 +35,8 @@
 - [Custom star formation histories](#custom-star-formation-histories)
 - [Using priors on the infrared luminosity](#using-priors-on-the-infrared-luminosity)
 - [Better treatment of spectra](#better-treatment-of-spectra)
+- [Additional documentation](#additional-documentation)
+- [Adding new filters](#adding-new-filters)

 <!-- /MarkdownTOC -->

@@ -413,3 +415,45 @@ Spectrum file (FAST++ format):

In this example the information in both catalogs is the same. But the new syntax allows more possibilities, for example adaptive binning, or combining spectra from different instruments (or passbands) with large gaps in between or different spectral resolutions. The ```tr``` (transmission) column, which is just a binary "use/don't use" flag, becomes useless since the grid does not need to be uniform anymore.

Even when using the old format, the treatment of these spectrum files is more correct in FAST++. The ```bin``` column correctly combines multiple data points into a single measurement (using inverse variance weighting), rather than simply using the first value of a bin (why FAST-IDL implements it this way, I do not know). The order of the columns in the file does not matter, while FAST-IDL assumes a fixed format (but does not tell you).
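The inverse variance weighting mentioned above can be illustrated with a minimal standalone sketch (the function name and interface here are hypothetical, not FAST++'s internals): each measurement is weighted by 1/error², and the combined uncertainty shrinks as more points enter the bin.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Combine several flux measurements falling in the same spectral bin into
// a single value using inverse variance weighting:
//   w_i = 1/err_i^2,  flux = sum(w_i*f_i)/sum(w_i),  err = 1/sqrt(sum(w_i)).
void combine_bin(const std::vector<double>& flx, const std::vector<double>& err,
                 double& cflx, double& cerr) {
    double wsum = 0.0, fsum = 0.0;
    for (std::size_t i = 0; i < flx.size(); ++i) {
        double w = 1.0/(err[i]*err[i]);
        wsum += w;
        fsum += w*flx[i];
    }
    cflx = fsum/wsum;
    cerr = 1.0/std::sqrt(wsum);
}
```

With equal uncertainties this reduces to a plain average; points with larger errors contribute proportionally less.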

# Additional documentation

## Adding new filters

For compatibility reasons, the filter database format is the same as that of EAzY and FAST. This is an ASCII/text file which contains all the filters, listed one after the other. The format must be:
```
num_pts [optional extra information...]
id lam trans
id lam trans
id lam trans
...
num_pts [optional extra information...]
id lam trans
id lam trans
id lam trans
...
```

In this example ```num_pts``` must be the number of data points in the filter response curve. Then for each data point, ```id``` is the identifier of that point (unused), ```lam``` is the wavelength in Angstrom, and ```trans``` is the filter transmission at that wavelength.

The overall normalization factor of the filter transmission does not matter, as the filters are always automatically re-normalized to unit integral before being used in the fit. If ```FILTERS_FORMAT=0```, the integral of ```lam*trans``` is normalized to unity, and if ```FILTERS_FORMAT=1``` then the integral of ```lam^2*trans``` is set to unity. This is exactly the same behavior as in FAST and EAzY.
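For ```FILTERS_FORMAT=0```, the re-normalization can be sketched as follows, assuming simple trapezoidal integration on the (possibly non-uniform) wavelength grid; the function name is hypothetical and FAST++'s actual integration scheme may differ:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Rescale a tabulated filter so that the integral of lam*trans equals
// unity (the FILTERS_FORMAT=0 convention), using trapezoidal integration
// on a possibly non-uniform wavelength grid.
void normalize_filter(const std::vector<double>& lam, std::vector<double>& trans) {
    double itot = 0.0;
    for (std::size_t i = 1; i < lam.size(); ++i) {
        itot += 0.5*(lam[i] - lam[i-1])*(lam[i]*trans[i] + lam[i-1]*trans[i-1]);
    }
    for (auto& t : trans) t /= itot;
}
```

The ```FILTERS_FORMAT=1``` case is identical except that the integrand becomes ```lam*lam*trans```.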

To add new filters, simply append them to the end of the ```FILTER.RES``` file following the format above. For example, if your filter has 11 data points you would write:
```
...
11 my custom favorite filter (lambda0=5000 A)
1 4000.0 0.00
2 4200.0 0.10
3 4400.0 0.20
4 4600.0 0.40
5 4800.0 0.50
6 5000.0 0.55
7 5200.0 0.60
8 5400.0 0.40
9 5600.0 0.20
10 5800.0 0.10
11 6000.0 0.00
```

The wavelength step of the tabulated filter does not need to be constant, and the filter is assumed to have zero transmission below the minimum and above the maximum tabulated wavelength.
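A minimal sketch of this convention follows: zero transmission outside the tabulated range is as stated above, while linear interpolation inside the range is an assumption for illustration (the interpolation actually used by FAST++ may differ), and the function name is hypothetical.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Evaluate a tabulated filter at an arbitrary wavelength l [Angstrom]:
// linear interpolation between tabulated points, and zero transmission
// below the minimum and above the maximum tabulated wavelength.
double filter_at(const std::vector<double>& lam, const std::vector<double>& trans,
                 double l) {
    if (l < lam.front() || l > lam.back()) return 0.0;

    // Find the first tabulated point at or beyond l
    std::size_t i = 1;
    while (lam[i] < l) ++i;

    // Linear interpolation between points i-1 and i
    double x = (l - lam[i-1])/(lam[i] - lam[i-1]);
    return trans[i-1]*(1.0 - x) + trans[i]*x;
}
```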

src/fast++-read_input.cpp

Lines changed: 48 additions & 32 deletions
@@ -480,45 +480,61 @@ bool read_filters(const options_t& opts, input_state_t& state) {
         if (line.empty()) continue;

         vec1s spl = split_any_of(line, " \t\n\r");
-        if (spl.size() > 3) {
-            // Start of a new filter
+        uint_t npts;
+        if (!from_string(spl[0], npts)) {
+            warning("could not understand l.", l, " in ", opts.filters_res);
+            continue;
+        }

-            // Save the previous one, if any
-            if (!filt.wl.empty()) {
-                state.filters.push_back(filt);
+        // Start of a new filter

-                // Cleanup for next filter
-                filt.wl.clear();
-                filt.tr.clear();
-                filt.id = npos;
-            }
+        // Save the previous one, if any
+        if (!filt.wl.empty()) {
+            state.filters.push_back(filt);

-            ++ntotfilt;
-            filt.id = ntotfilt;
+            // Cleanup for next filter
+            filt.wl.clear();
+            filt.tr.clear();
+            filt.id = npos;
+        }

-            // Determine if this filter is used in the catalog
-            vec1u idused = where(state.no_filt == filt.id);
-            if (!idused.empty()) {
-                // It is there, keep the ID aside for later sorting
-                append(idcat, idused);
-                append(idfil, replicate(state.filters.size(), idused.size()));
-                doread = true;
-            } else {
-                // If not, discard its values
-                doread = false;
-            }
+        ++ntotfilt;
+        filt.id = ntotfilt;

-        } else if (doread && spl.size() == 3) {
-            // Reading the filter response
-            float wl, tr;
-            if (!from_string(spl[1], wl) || !from_string(spl[2], tr)) {
-                error("could not parse values from line ", l);
-                note("reading '", opts.filters_res, "'");
-                return false;
+        // Determine if this filter is used in the catalog
+        vec1u idused = where(state.no_filt == filt.id);
+        if (!idused.empty()) {
+            // It is there, keep the ID aside for later sorting
+            append(idcat, idused);
+            append(idfil, replicate(state.filters.size(), idused.size()));
+            doread = true;
+        } else {
+            // If not, discard its values
+            doread = false;
+        }
+
+        uint_t lcnt = 0;
+        while (lcnt < npts && std::getline(in, line)) {
+            ++l;
+
+            if (line.empty()) continue;
+
+            if (doread) {
+                // Reading the filter response line by line
+                spl = split_any_of(line, " \t\n\r");
+                float wl, tr;
+                if (!from_string(spl[1], wl) || !from_string(spl[2], tr)) {
+                    error("could not parse values from line ", l);
+                    note("reading '", opts.filters_res, "'");
+                    return false;
+                }
+
+                filt.wl.push_back(wl);
+                filt.tr.push_back(tr);
             }

-            filt.wl.push_back(wl);
-            filt.tr.push_back(tr);
+
+            ++lcnt;
         }
     }

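The net effect of this change can be illustrated with a minimal standalone parser (hypothetical names, plain STL containers instead of FAST++'s vector types): the header's first token gives the number of points, and exactly that many data lines are then consumed, so the reader no longer infers filter boundaries from the number of tokens per line.

```cpp
#include <cassert>
#include <cstddef>
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// A tabulated filter: wavelength [Angstrom] and transmission.
struct filter_t {
    std::vector<double> wl, tr;
};

// Read an EAzY/FAST-style filter database: each filter starts with a
// header line "num_pts [description...]", followed by exactly num_pts
// data lines "id lam trans".
std::vector<filter_t> read_filter_db(std::istream& in) {
    std::vector<filter_t> filters;
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty()) continue;

        // Header line: number of points, then optional free-form text
        std::istringstream hdr(line);
        std::size_t npts = 0;
        if (!(hdr >> npts)) continue; // skip lines we cannot understand

        // Consume exactly npts data lines for this filter
        filter_t filt;
        std::size_t cnt = 0;
        while (cnt < npts && std::getline(in, line)) {
            if (line.empty()) continue;
            std::istringstream data(line);
            std::size_t id; double wl, tr;
            data >> id >> wl >> tr;
            filt.wl.push_back(wl);
            filt.tr.push_back(tr);
            ++cnt;
        }

        filters.push_back(filt);
    }
    return filters;
}
```

Because the point count is taken from the header, a stray extra token on a data line can no longer be mistaken for the start of a new filter, which is what made the old `spl.size() > 3` heuristic fragile.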
0 commit comments
