WIP v1 release preparation #41
Conversation
* Start to add docs
* Start to add type checking
* Remove analyze.py
* Add ParallelDenseExecutor, which is parallel over the (k, w) points
* Move the petsc4py solver to its own submodule, so that people who wish to run MPI jobs without PETSc can still import a parallel module without the import error
* Add saving of G and execution times to disk; generate comparisons to ground truth for PETSc and serial dense (continued fraction)
* Collect G from the last rank while staying on rank 0; change the getRank/getSize syntax in base.py (does not affect the other executors)
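The point of moving the petsc4py solver into its own submodule (so MPI users without PETSc avoid the import error) can be illustrated with a guarded-import pattern. This is a hypothetical sketch, not the actual GGCE module layout; the `get_solver` function and the solver names are made up for illustration:

```python
# Sketch of the optional-dependency pattern described above: the
# PETSc-backed solver lives behind a guarded import, so importing the
# plain MPI-parallel executor never fails when petsc4py is missing.
try:
    from petsc4py import PETSc  # optional accelerator backend
    PETSC_AVAILABLE = True
except ImportError:
    PETSc = None
    PETSC_AVAILABLE = False


def get_solver(prefer_petsc=True):
    """Return the name of the best available solver (names are illustrative)."""
    if prefer_petsc and PETSC_AVAILABLE:
        return "PETScSolver"
    return "ParallelDenseSolver"
```

With this layout, `import ggce` (or any non-PETSc submodule) succeeds on machines without a PETSc build, and only explicitly requesting the PETSc solver touches petsc4py.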
@Chiffafox hey Stepan! I have overhauled the entire API to be much more user friendly. Everything in |models.py| and the entire |engine| module are done. I've more or less compared the basis constructions to the previous version of the code in your branch (|new_lorentzfits|). At this point, I'm going to take a shot at the executors. Primarily, I'm going to implement a "global syntax" for checkpoint-restart and object serialization (so far a lot is MSONable; we'll make it pickle-able too). Just wanted to keep you in the loop on this. Honestly, we could start writing a computational paper on this if you want... what do you think? Edit: oh, and the framework for 2 and 3 dimensions is in place, e.g., the 2 and 3 dimensional
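The "MSONable plus pickle-able" goal mentioned above might look like the following sketch. The `Model` class and its fields are hypothetical stand-ins, not the real GGCE API, and the `as_dict`/`from_dict` pair is a simplified version of monty's MSONable interface (the real one also records `@module`/`@class` keys):

```python
import pickle
from dataclasses import dataclass, asdict


@dataclass
class Model:
    """Hypothetical stand-in for a GGCE model object."""
    hopping: float
    phonon_frequency: float

    def as_dict(self):
        # MSONable-style serialization to a plain dict (simplified)
        return asdict(self)

    @classmethod
    def from_dict(cls, d):
        return cls(**d)


m = Model(hopping=1.0, phonon_frequency=0.5)
# Dataclasses are picklable by default, so the pickle round-trip is free
m2 = pickle.loads(pickle.dumps(m))
```

Supporting both paths means objects can be checkpointed with pickle for speed while staying JSON-serializable for human-readable provenance.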
Hey Matt,
I think I accidentally did not reply to your actual email, but to some
GitHub notifications service (not sure if this reached you -- if it did,
sorry for the double email and no rush in responding!). See below for my
original response.
-------- Forwarded Message --------
Subject: Re: [x94carbone/GGCE] WIP v1 release preparation (PR #41)
Date: Mon, 13 Jun 2022 12:04:28 -0700
From: Stepan Fomichev ***@***.***>
To: Matthew Carbone ***@***.***>
Hey Matt! This sounds excellent, thank you for all your work on this!
Checkpoint/restart + serialization should be a great way to enable
larger computations once it is combined with PETSc's ability to split
across multiple nodes. The last thing I want to do on this front is to
clean up the double parallelization over MPI; together with all your
work, that would be an excellent combo for the release. Now that you've
finished the engine module, I can go do that using the new syntax.
I am down to start writing -- how do you want to structure / organize this?
Sincerely,
Stepan
On 6/11/2022 3:35 PM, Matthew Carbone wrote:
@Chiffafox <https://github.com/Chiffafox> hey Stepan! I have
overhauled the entire API to be much more user friendly. Everything in
|models.py| and the entire |engine| module are done. I've more or less
compared the basis constructions to the previous version of the code
in your branch (|new_lorentzfits|).
At this point, I'm going to take a shot at the executors. Primarily,
I'm going to implement a "global syntax" for checkpoint-restart and
object serialization (so far a lot is MSONable, we'll make it
pickle-able too). Just wanted to keep you in the loop on this.
Honestly, we could start writing a computational paper on this if you
want... what do you think?
Hey @Chiffafox, sounds good. Yeah, if it's no different to you, definitely branch off of
@Chiffafox I have implemented the new versions of the solvers. They should be MPI-compatible by default, with like 10x less code. I still have to implement the dispersion stuff, but there may even be a way to parallelize that too. Let me know what you think! I think the next thing to do is to add the checkpoint/restart to the solver objects, and then finally to re-implement your PETSc accelerators. Then we're done with the code, basically!
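A minimal sketch of the checkpoint/restart idea for the solver objects: persist results after each solved (k, w) point so an interrupted sweep resumes where it left off. Everything here, including the function name, the file layout, and the placeholder "solve", is an assumption for illustration, not the actual implementation:

```python
import os
import pickle
import tempfile


def solve_with_checkpoint(points, path):
    """Sweep over (k, w) points, resuming from a pickle checkpoint if present."""
    results = {}
    if os.path.exists(path):
        with open(path, "rb") as f:
            results = pickle.load(f)  # restart: load already-solved points
    for p in points:
        if p in results:
            continue  # skip work done before the interruption
        results[p] = p[0] * p[1]  # placeholder for the real solve
        with open(path, "wb") as f:
            pickle.dump(results, f)  # checkpoint after each solved point
    return results


ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.pkl")
first = solve_with_checkpoint([(1, 2), (3, 4)], ckpt)
# A second call restarts from the checkpoint and recomputes nothing
second = solve_with_checkpoint([(1, 2), (3, 4)], ckpt)
```

Checkpointing after every point is the simplest policy; a real solver would likely batch the writes or do them only from rank 0.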
The finite temperature code was having some issues. Before this fix, the code only worked properly when there was a single phonon type. Now everything is properly parsed for more than one phonon type.
I think 97f9d42 should fix the finite-T issues with H+P, but we should definitely test it.
Complete, backwards-incompatible overhaul of the code. I will be
loguru