Line ratios tools #69
base: master
Conversation
@odstrcilt I made these scripts a long time ago for line ratio studies and showed them to the groups in Greifswald and Garching, but did not hear whether they are being used or not. It's quite simple, but has some decent generality. Would you mind having a look?
I have not tested it. There is just one bug, on line 193: it will calculate the uncertainty properly only for two lines. And then I have included several suggestions for improving the code.
# uncertainty on ratio with first signal
unc0 = np.sqrt((1/vals[1])**2*uncs[0]**2 + (vals[0]/vals[1]**2)**2*uncs[1]**2)
# metric: chi^2
res += np.abs((val/vals[0] - tec_fun(ne_cm3, Te_eV)/tec0)/unc0)
Why is np.abs used here? The function should return a vector of residuals.
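To illustrate the point: scipy's least_squares squares and sums the residuals internally, so the helper should return the full (signed) residual vector rather than an accumulated absolute value. A minimal sketch with a toy linear model (names and data here are illustrative, not from this PR):

```python
import numpy as np
from scipy import optimize

# Toy data: y = 2*x plus noise, with known per-point errors
rng = np.random.default_rng(0)
x = np.linspace(1.0, 5.0, 20)
y = 2.0 * x + rng.normal(0.0, 0.1, x.size)
err = np.full_like(x, 0.1)

def residuals(p):
    # Return the vector of weighted residuals; least_squares
    # squares and sums them internally, so no np.abs is needed.
    return (y - p[0] * x) / err

res = optimize.least_squares(residuals, x0=[1.0])
```

Returning the vector also gives least_squares access to the Jacobian structure, which improves convergence and makes the covariance estimate meaningful.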
res = 0
for val,unc,tec_fun in zip(vals[1:],uncs[1:],tec_funs[1:]):
    # uncertainty on ratio with first signal
    unc0 = np.sqrt((1/vals[1])**2*uncs[0]**2 + (vals[0]/vals[1]**2)**2*uncs[1]**2)
It should be
unc0 = np.sqrt((1/val)**2*uncs[0]**2 + (vals[0]/val**2)**2*unc**2)
or, simplified,
unc0 = vals[0]/val * np.hypot(uncs[0]/vals[0], unc/val)
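A quick numerical check (with illustrative numbers, not from the PR) that the expanded propagation formula and the compact np.hypot form are algebraically identical for a ratio r = a/b with independent uncertainties:

```python
import numpy as np

# Ratio r = a/b with independent 1-sigma uncertainties sa, sb
a, b = 3.0, 1.5
sa, sb = 0.2, 0.1

# Expanded first-order error propagation
unc_expanded = np.sqrt((1.0 / b) ** 2 * sa**2 + (a / b**2) ** 2 * sb**2)

# Equivalent compact form: relative errors added in quadrature
unc_hypot = a / b * np.hypot(sa / a, sb / b)
```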
def min_fun(x, vals, uncs, tec_funs):
    '''Minimization helper, allowing an arbitrary number of signals to be given.
    All are normalized to the first (0th) signal.
But what happens if the first signal is so weak that it is just random noise around zero? The information can still be extracted from the other signals.
One option is to add one extra free parameter to the minimization describing the scale. This number is multiplied by the tec_fun(ne_cm3, Te_eV) output and then compared with the data by calculating chi2.
The other option is to find the scaling factor analytically:
scale = sum(data * model/err**2)/sum(model**2/err**2)
chi2 = sum((data - model*scale)**2/err**2)
and the number of degrees of freedom used for the uncertainty estimate will decrease by one.
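The analytic scaling can be sketched as follows (toy arrays, illustrative names only): the scale is the weighted projection of the data onto the model, which minimizes chi2 with respect to the scale factor.

```python
import numpy as np

# Toy data: the model times an unknown scale (2.5), plus noise
rng = np.random.default_rng(1)
model = np.array([1.0, 2.0, 3.0, 4.0])
err = np.full(4, 0.1)
data = 2.5 * model + rng.normal(0.0, 0.1, 4)

# Weighted least-squares scale factor (minimizes chi2 analytically)
scale = np.sum(data * model / err**2) / np.sum(model**2 / err**2)
chi2 = np.sum((data - model * scale) ** 2 / err**2)
```

Because the scale is solved analytically, it does not need to be a free parameter of the optimizer, but it still costs one degree of freedom in the chi2 statistic.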
# bounds in order mins, maxs
if fixed_ne_cm3 is None:
    # optimize for ne and Te
    res_lsq = optimize.least_squares(lambda x: min_fun(x, vals, uncs, tec_funs),
The function curve_fit is better for this purpose: it calculates the covariance matrix directly, and it can use a pseudoinversion if the matrix is singular.
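A minimal sketch of the curve_fit approach, with a toy linear model standing in for the TEC-ratio model (all names and data here are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy data with known per-point errors
rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 50)
y = 1.5 * x + 0.5 + rng.normal(0.0, 0.2, x.size)
sigma = np.full_like(x, 0.2)

def model(x, a, b):
    return a * x + b

# absolute_sigma=True treats sigma as true 1-sigma errors,
# so pcov is a proper covariance matrix
popt, pcov = curve_fit(model, x, y, p0=[1.0, 0.0],
                       sigma=sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))  # 1-sigma parameter uncertainties
```

This removes the need to build the covariance matrix by hand from the Jacobian returned by least_squares.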
tol = np.finfo(float).eps*s[0]*max(res_lsq.jac.shape)
w = s > tol
cov = (Vh[w].T/s[w]**2) @ Vh[w]  # robust covariance matrix
perr = np.sqrt(np.diag(cov))  # 1 sigma uncertainty on fitted parameters
You could also use np.linalg.pinv.
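A sketch showing that np.linalg.pinv applied to J^T J reproduces the manual SVD-truncation covariance (the Jacobian here is an illustrative 3x2 example, not from the PR):

```python
import numpy as np

# Illustrative full-rank Jacobian (3 residuals, 2 parameters)
J = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])

# Pseudoinverse route: one call, also handles singular J^T J
cov_pinv = np.linalg.pinv(J.T @ J)

# Manual SVD route, as in the snippet above
_, s, Vh = np.linalg.svd(J, full_matrices=False)
tol = np.finfo(float).eps * s[0] * max(J.shape)
w = s > tol
cov_svd = (Vh[w].T / s[w] ** 2) @ Vh[w]
```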
# normalize all signals to mean value of first line signal
data_norm = copy.deepcopy(data)
line0 = list(data.keys())[0]
norm_val = np.nanmean(data[line0]['val'])
This was not used.
include_rec = False if atom_data is None else True

# normalize all signals to mean value of first line signal
data_norm = copy.deepcopy(data)
The data were normalized only inside min_fun.
# fractional abundances at ionization equilibrium
atom_data = atomic.get_atom_data(imp, ["scd", "acd"])
_Te, fz = atomic.get_frac_abundances(
It should be possible to replace most of the code here with two calls to the tec class.
General tools to examine spectroscopic line ratios using ADAS data. All of the new relevant code is in a single file, lr_tools.py. The implementation resembles the one described in S. Henderson's NII line-ratio papers. The code retains a good degree of generality by asking users to specify, for each line to be considered, (a) the ADAS "ISEL" indices of the excitation and recombination components (specific to a chosen ADF15 file), (b) the line intensity values, and (c) their uncertainties. An arbitrary number of ISEL numbers can be considered/summed for each experimental line shape: more than one transition can be included by providing additional ISEL numbers within the "list of lists" of the 'isel' sub-dictionary for a given line. If no recombination component should be included, users should set the second element of the corresponding sub-list to None.
Example: evaluate ne and Te from the ratio of the NII line intensities at 4041A and 3995A, considering experimental uncertainties.
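A hypothetical sketch of the input structure described above, for the two NII lines in the example. The ISEL numbers here are placeholders, and the key names 'isel', 'val', 'unc' follow the description and the snippets quoted in this PR; check lr_tools.py for the exact convention.

```python
import numpy as np

# Hypothetical input dict: one entry per experimental line, each with
# a list of [excitation_isel, recombination_isel] pairs (None means no
# recombination component), measured intensities, and uncertainties.
data = {
    "NII_3995": {
        "isel": [[15, None]],          # single transition, no recomb.
        "val": np.array([1.0e17]),     # measured line intensity
        "unc": np.array([5.0e15]),     # 1-sigma uncertainty
    },
    "NII_4041": {
        # two transitions summed into one experimental line shape
        "isel": [[20, None], [21, None]],
        "val": np.array([4.2e16]),
        "unc": np.array([3.0e15]),
    },
}
```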
Interested parties: @wintersv19 , F. Henke (github username?)
@ Victoria, Frederik: could you please try this out and let me know whether everything works well for you before we merge? Requests and suggestions for changes would certainly be appreciated!