Add the marginal effect or the constant marginal effect #76
Hi @juandavidgutier,

```julia
g_computer = GComputation(x, t, y)
estimate_causal_effect!(g_computer)
summarize(g_computer, inference=true)
```

I think this would definitely be feasible for the next release, though. As for marginal effects, I know this is very straightforward for something like a logistic regression, but I'm not exactly sure how you would do it with multiple models, e.g. when using double machine learning or a metalearner. Do you have any references for this? I'm definitely open to it, and I think it would be good to calculate marginal effects in the summarize method.
Hi @dscolby, Unfortunately, I am not an expert in Julia programming, but one option for estimating confidence intervals could be to follow the documentation of the Python package EconML. As I understand it, the procedure would be as follows. For confidence intervals, see: https://econml.azurewebsites.net/_modules/econml/inference/_bootstrap.html#BootstrapEstimator

For the marginal effect, you could see the following EconML documentation:
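For reference, the percentile-bootstrap idea behind EconML's `BootstrapEstimator` can be sketched roughly like this. This is a hedged, illustrative Python snippet, not CausalELM or EconML code; `estimate_ate` is a hypothetical stand-in for any causal estimator (here a naive difference in means just to make the sketch runnable):

```python
import numpy as np

def estimate_ate(x, t, y):
    # Naive difference-in-means estimator, used only to make the sketch concrete.
    # In practice this would be a full estimator such as G-computation.
    return y[t == 1].mean() - y[t == 0].mean()

def bootstrap_ci(x, t, y, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the ATE: resample rows with replacement,
    re-estimate the effect each time, and take empirical quantiles."""
    rng = np.random.default_rng(seed)
    n = len(y)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # bootstrap resample of row indices
        estimates[b] = estimate_ate(x[idx], t[idx], y[idx])
    lower, upper = np.quantile(estimates, [alpha / 2, 1 - alpha / 2])
    return lower, upper
```

The main design question is the one raised below: each bootstrap replicate refits the whole estimator, so the cost is `n_boot` times a single fit unless the replicates are parallelized.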
@juandavidgutier Bootstrapping runs into the same performance issues as randomization/permutation inference. I considered bootstrap inference but ultimately went with randomization inference because it answers a slightly different question. Bootstrapping tells us the probability of seeing an effect at least as extreme as the estimated effect under some theoretical (normal) distribution, whereas randomization inference tells us the proportion of times we would see an effect at least as extreme as the estimated effect under different treatment assignment mechanisms. Either way, I think I'll need to work on getting it parallelized, which is what EconML does. But getting the p-value is definitely feasible, so I'll work on that as I have time. For the marginal effect, I was originally thinking about taking derivatives with estimators like R-learners that have multiple models, which would be tough, especially since each estimator is different. But it seems like other packages that use simpler estimators just use the finite difference approximation, which should also be pretty straightforward to implement. So, I'll also get to work on this for the next release, but it will probably be slow going because I have a lot on my plate right now. But again, thanks for the suggestions and references.
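The finite difference approximation mentioned above treats the fitted model as a black box, so it works regardless of how many sub-models the estimator contains. A minimal Python sketch, assuming a hypothetical prediction function `predict_outcome(x, t)` for the fitted outcome model (not an actual CausalELM API):

```python
import numpy as np

def marginal_effect(predict_outcome, x, t, eps=1e-4):
    """Approximate the average marginal effect dE[Y | T=t, X=x]/dT
    with a central finite difference: perturb the treatment up and down
    by eps, re-predict, and average the slope over the sample."""
    up = predict_outcome(x, t + eps)
    down = predict_outcome(x, t - eps)
    return np.mean((up - down) / (2 * eps))
```

For a binary treatment the analogous quantity is the discrete contrast `predict_outcome(x, 1) - predict_outcome(x, 0)` averaged over the sample, which sidesteps the step size entirely.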
Hi @dscolby, you are right; randomization inference and bootstrapping share the same performance issues. Thanks for listening to the suggestion.
Hi @dscolby
I work on implementing causal learning in eco-epidemiology, and I recently discovered CausalELM. I see important features in the package, such as G-computation and the E-value. However, it would be amazing if you could add the marginal effect or the constant marginal effect, along with their confidence intervals.