
Commit b23f169

Reworked reading filters from database to be more like FAST and EAzY.
Also added documentation for adding new filters.

1 parent: e763bf7

2 files changed (+92, -32 lines)

README.md

Lines changed: 44 additions & 0 deletions
````diff
@@ -31,6 +31,8 @@
 - [Monte Carlo simulations](#monte-carlo-simulations)
 - [Controlling the cache](#controlling-the-cache)
 - [Better treatment of spectra](#better-treatment-of-spectra)
+- [Additional documentation](#additional-documentation)
+- [Adding new filters](#adding-new-filters)
 
 <!-- /MarkdownTOC -->
 
````

````diff
@@ -262,3 +264,45 @@ Spectrum file (FAST++ format):
 In this example the information in both catalogs is the same. But the new syntax allows more possibilities, for example adaptive binning, or combining spectra from different instruments (or passbands) with large gaps in between or different spectral resolutions. The ```tr``` (transmission) column, which is just a binary "use/don't use" flag, becomes useless since the grid does not need to be uniform anymore.
 
 Even when using the old format, the treatment of these spectrum files is also more correct in FAST++. The ```bin``` column correctly combines multiple data points into a single measurement (using inverse variance weighting) rather than simply using the first value of a bin (why this is implemented in this way in FAST-IDL, I do not know). The order of the columns in the file does not matter, while FAST-IDL assumes a fixed format (but does not tell you).
+
+
+# Additional documentation
+
+## Adding new filters
+
+For compatibility reasons, the filter database format is the same as that of EAzY and FAST. This is an ASCII/text file which contains all the filters, listed one after the other. The format must be:
+```
+num_pts [optional extra information...]
+id lam trans
+id lam trans
+id lam trans
+...
+num_pts [optional extra information...]
+id lam trans
+id lam trans
+id lam trans
+...
+```
+
+In this example ```num_pts``` must be the number of data points in the filter response curve. Then, for each data point, ```id``` is the identifier of that point. This ID is unused by FAST++, but it is recommended to make it start at one and increment by one for each data point (consistency checks may be implemented in the future). Then, ```lam``` is the wavelength in Angstrom, and ```trans``` is the filter transmission at that wavelength.
+
+The overall normalization factor of the filter transmission does not matter, as the filters are automatically re-normalized to unit integral before the fit. If ```FILTERS_FORMAT=0```, the integral of ```lam*trans``` is normalized to unity, and if ```FILTERS_FORMAT=1``` then the integral of ```lam^2*trans``` is set to unity. The flux in a band is then computed as the integral of ```lam*trans*flx``` (for ```FILTERS_FORMAT=0```) or ```lam^2*trans*flx``` (for ```FILTERS_FORMAT=1```), where ```flx``` is the flux density (```Slambda```) in ```erg/s/cm^2/A```. This is exactly the same behavior as in FAST and EAzY.
+
+To add new filters, simply append them to the end of the ```FILTER.RES``` file following the format above. For example, if your filter has 11 data points you would write:
+```
+...
+11 my custom favorite filter (lambda0=5000 A)
+1 4000.0 0.00
+2 4200.0 0.10
+3 4400.0 0.20
+4 4600.0 0.40
+5 4800.0 0.50
+6 5000.0 0.55
+7 5200.0 0.60
+8 5400.0 0.40
+9 5600.0 0.20
+10 5800.0 0.10
+11 6000.0 0.00
+```
+
+The wavelength step of the tabulated filter does not need to be constant, and the filter is assumed to have zero transmission below the minimum and above the maximum tabulated wavelength.
````
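
As an illustration of the normalization described in the new section above (this sketch is not part of the commit and not the FAST++ implementation), the following standalone C++ snippet computes a band flux from a tabulated filter. It assumes simple trapezoidal integration, which may differ from the integration scheme actually used inside FAST++; it only shows the role of ```FILTERS_FORMAT```. With the 11-point example filter above and a flat ```Slambda```, the result is exactly 1 by construction.

```cpp
// Minimal sketch (not part of FAST++): band flux through a tabulated filter,
// following the FILTERS_FORMAT normalization described above.
#include <cstddef>
#include <cstdio>
#include <vector>

// 'lam' is the wavelength in Angstrom, 'tr' the tabulated transmission, and
// 'flx' the flux density Slambda in erg/s/cm^2/A, sampled at the same points.
double band_flux(const std::vector<double>& lam, const std::vector<double>& tr,
                 const std::vector<double>& flx, int filters_format) {
    // Trapezoidal integration; the wavelength step does not need to be constant.
    auto integrate = [&](const std::vector<double>& y) {
        double s = 0.0;
        for (std::size_t i = 1; i < lam.size(); ++i) {
            s += 0.5*(y[i-1] + y[i])*(lam[i] - lam[i-1]);
        }
        return s;
    };

    std::vector<double> w(lam.size()), wf(lam.size());
    for (std::size_t i = 0; i < lam.size(); ++i) {
        // lam*trans for FILTERS_FORMAT=0, lam^2*trans for FILTERS_FORMAT=1.
        w[i]  = tr[i]*(filters_format == 0 ? lam[i] : lam[i]*lam[i]);
        wf[i] = w[i]*flx[i];
    }

    // Dividing by the integral of the weight is what makes the overall
    // normalization of the tabulated transmission irrelevant.
    return integrate(wf)/integrate(w);
}

int main() {
    // The 11-point example filter from above with a flat Slambda = 1;
    // after normalization the band flux must come out as exactly 1.
    std::vector<double> lam = {4000, 4200, 4400, 4600, 4800, 5000,
                               5200, 5400, 5600, 5800, 6000};
    std::vector<double> tr  = {0.00, 0.10, 0.20, 0.40, 0.50, 0.55,
                               0.60, 0.40, 0.20, 0.10, 0.00};
    std::vector<double> flx(lam.size(), 1.0);
    std::printf("%.6f\n", band_flux(lam, tr, flx, 0)); // prints 1.000000
    return 0;
}
```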

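The recommended ID numbering is also easy to verify before appending a new filter. Since such consistency checks are not (yet) implemented in FAST++, the snippet below is only a hypothetical standalone checker, assuming whitespace-separated columns as in the format description above.

```cpp
// Hypothetical checker (not implemented in FAST++): verifies that one filter
// entry contains exactly num_pts data points with IDs 1, 2, ..., num_pts.
#include <cstddef>
#include <iostream>
#include <sstream>
#include <string>

bool check_filter_entry(std::istream& in, std::string& error) {
    std::string line;
    if (!std::getline(in, line)) { error = "empty input"; return false; }

    // The first column of the header must be the number of data points;
    // anything after it is optional extra information and is ignored here.
    std::size_t npts = 0;
    std::istringstream hdr(line);
    if (!(hdr >> npts) || npts == 0) {
        error = "header must start with the number of data points";
        return false;
    }

    for (std::size_t i = 1; i <= npts; ++i) {
        if (!std::getline(in, line)) { error = "truncated filter entry"; return false; }
        std::istringstream row(line);
        std::size_t id = 0; double lam = 0, trans = 0;
        if (!(row >> id >> lam >> trans)) { error = "could not parse data point"; return false; }
        if (id != i) { error = "IDs should start at one and increment by one"; return false; }
    }

    return true;
}

int main() {
    // A small, made-up 3-point entry in the documented format.
    std::istringstream entry(
        "3 some custom filter\n"
        "1 4000.0 0.0\n"
        "2 5000.0 1.0\n"
        "3 6000.0 0.0\n");

    std::string error;
    if (!check_filter_entry(entry, error)) {
        std::cerr << "invalid filter entry: " << error << "\n";
        return 1;
    }

    std::cout << "filter entry looks consistent\n";
    return 0;
}
```
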
src/fast++-read_input.cpp

Lines changed: 48 additions & 32 deletions
```diff
@@ -287,45 +287,61 @@ bool read_filters(const options_t& opts, input_state_t& state) {
         if (line.empty()) continue;
 
         vec1s spl = split_any_of(line, " \t\n\r");
-        if (spl.size() > 3) {
-            // Start of a new filter
+        uint_t npts;
+        if (!from_string(spl[0], npts)) {
+            warning("could not understand l.", l, " in ", opts.filters_res);
+            continue;
+        }
 
-            // Save the previous one, if any
-            if (!filt.wl.empty()) {
-                state.filters.push_back(filt);
+        // Start of a new filter
 
-                // Cleanup for next filter
-                filt.wl.clear();
-                filt.tr.clear();
-                filt.id = npos;
-            }
+        // Save the previous one, if any
+        if (!filt.wl.empty()) {
+            state.filters.push_back(filt);
 
-            ++ntotfilt;
-            filt.id = ntotfilt;
+            // Cleanup for next filter
+            filt.wl.clear();
+            filt.tr.clear();
+            filt.id = npos;
+        }
 
-            // Determine if this filter is used in the catalog
-            vec1u idused = where(state.no_filt == filt.id);
-            if (!idused.empty()) {
-                // It is there, keep the ID aside for later sorting
-                append(idcat, idused);
-                append(idfil, replicate(state.filters.size(), idused.size()));
-                doread = true;
-            } else {
-                // If not, discard its values
-                doread = false;
-            }
+        ++ntotfilt;
+        filt.id = ntotfilt;
 
-        } else if (doread && spl.size() == 3) {
-            // Reading the filter response
-            float wl, tr;
-            if (!from_string(spl[1], wl) || !from_string(spl[2], tr)) {
-                error("could not parse values from line ", l);
-                note("reading '", opts.filters_res, "'");
-                return false;
+        // Determine if this filter is used in the catalog
+        vec1u idused = where(state.no_filt == filt.id);
+        if (!idused.empty()) {
+            // It is there, keep the ID aside for later sorting
+            append(idcat, idused);
+            append(idfil, replicate(state.filters.size(), idused.size()));
+            doread = true;
+        } else {
+            // If not, discard its values
+            doread = false;
+        }
+
+        uint_t lcnt = 0;
+        while (lcnt < npts && std::getline(in, line)) {
+            ++l;
+
+            if (line.empty()) continue;
+
+            if (doread) {
+                // Reading the filter response line by line
+                spl = split_any_of(line, " \t\n\r");
+                float wl, tr;
+                if (!from_string(spl[1], wl) || !from_string(spl[2], tr)) {
+                    error("could not parse values from line ", l);
+                    note("reading '", opts.filters_res, "'");
+                    return false;
+                }
+
+                filt.wl.push_back(wl);
+                filt.tr.push_back(tr);
             }
 
-            filt.wl.push_back(wl);
-            filt.tr.push_back(tr);
+
+            ++lcnt;
         }
     }
 
```
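
A note on what this rework changes in practice: the old code recognized the start of a new filter by counting columns (```spl.size() > 3```), so a header carrying little or no extra information, which is valid in the format documented above, was not recognized. Its data points were then either dropped or silently appended to the previous filter, and a header with exactly three columns could even abort the read with a parsing error. The new code instead parses ```num_pts``` from the first column and reads exactly that many response lines, as FAST and EAzY do. For example, a minimal (hypothetical) entry such as the following is now handled correctly:

```
3 F606W
1 5000.0 0.0
2 6000.0 1.0
3 7000.0 0.0
```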
