added .travis.yml, codecov.yml, and badges to README.md file #6
… (which is probably not needed for this project)
Thank you very much! Valgrind seems useful for finding subtle bugs due to illegal memory access. Actually, the Lanczos algorithm has some parts which might cause off-by-one errors. I think valgrind would also be helpful for my own quantum many-body calculations (I've faced very complex indexing failures many times!). I'll rewrite the files to make them match GTest.
Excellent!

request

Please send me a pull request containing your "lambda_lanczos.hpp" and "lambda_lanczos_util.hpp" files.

Details: Please go to your lammps repository and copy your files into the src/USER-MISC subdirectory. Please make a pull request using my fork of the LAMMPS repository. Please select the "new_matrix_lib" fork.

question

It seems there are 2 different ways to find the smallest eigenvalue with LambdaLanczos:
I understand method #1.

Cheers
Thank you for your comment!

Simple answer
About find_maximum:

```cpp
if(this->find_maximum) {
  ev = find_maximum_eigenvalue(alpha, beta);
} else {
  ev = find_minimum_eigenvalue(alpha, beta);
}
```
The Lanczos algorithm converts the given matrix to a smaller (k by k) tridiagonal matrix. Then the eigenvalues {a_1, a_2, ..., a_k} of the tridiagonal matrix can be calculated by a binary search on a specific sequence (the Sturm sequence). The find_maximum flag controls whether a_1 or a_k is calculated; that choice is independent of the matrix elements.
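To make the bisection concrete, here is a small self-contained C++ sketch (illustrative only, not the library's code) that finds the smallest or largest eigenvalue of a symmetric tridiagonal matrix by counting sign changes in the Sturm sequence and bisecting within Gershgorin bounds:

```cpp
// Illustrative sketch only (not lambda_lanczos source code).
// Finds the k-th smallest eigenvalue of a symmetric tridiagonal matrix T
// (diagonal `alpha`, off-diagonal `beta`) by bisection on the Sturm sequence.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Number of eigenvalues of T that are smaller than x.
// d_1 = alpha_1 - x,  d_i = (alpha_i - x) - beta_{i-1}^2 / d_{i-1};
// the number of negative d_i equals the number of eigenvalues below x.
int eigenvalues_below(const std::vector<double>& alpha,
                      const std::vector<double>& beta, double x) {
  int count = 0;
  double d = 1.0;
  for (std::size_t i = 0; i < alpha.size(); ++i) {
    double b2 = (i == 0) ? 0.0 : beta[i - 1] * beta[i - 1];
    d = (alpha[i] - x) - b2 / (d != 0.0 ? d : 1e-300);  // guard against d == 0
    if (d < 0.0) ++count;
  }
  return count;
}

// k-th smallest eigenvalue (k = 1 gives a_1; k = alpha.size() gives a_k).
double kth_eigenvalue(const std::vector<double>& alpha,
                      const std::vector<double>& beta, int k) {
  double lo = alpha[0], hi = alpha[0];
  for (std::size_t i = 0; i < alpha.size(); ++i) {      // Gershgorin bounds
    double r = (i > 0 ? std::fabs(beta[i - 1]) : 0.0) +
               (i + 1 < alpha.size() ? std::fabs(beta[i]) : 0.0);
    lo = std::min(lo, alpha[i] - r);
    hi = std::max(hi, alpha[i] + r);
  }
  for (int iter = 0; iter < 100; ++iter) {               // plain bisection
    double mid = 0.5 * (lo + hi);
    if (eigenvalues_below(alpha, beta, mid) >= k) hi = mid; else lo = mid;
  }
  return 0.5 * (lo + hi);
}

int main() {
  std::vector<double> alpha{2.0, 2.0, 2.0};  // T = tridiag(-1, 2, -1), 3x3
  std::vector<double> beta{-1.0, -1.0};
  // Expected: 2 - sqrt(2) = 0.5858 and 2 + sqrt(2) = 3.4142
  std::printf("min = %.4f, max = %.4f\n",
              kth_eigenvalue(alpha, beta, 1),
              kth_eigenvalue(alpha, beta, 3));
}
```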
The eigenvalue_offset just shifts the eigenvalues. That means if we have a matrix whose eigenvalues are {1, 2, 3} and set eigenvalue_offset to -10, the matrix used in the algorithm has eigenvalues {-9, -8, -7}. And if we set find_maximum to true, the calculated eigenvalue will be -7, NOT -9.
The reason why eigenvalue_offset is necessary is stated here. But now I'm doubting whether eigenvalue_offset is truly necessary (see below).
eigenvalue_offset mystery
For a long time, I believed eigenvalue_offset is necessary because the power iteration method (an algorithm similar to the Lanczos method) requires such an offset, and articles I read mentioned that the Lanczos method does too. But recently, a discussion with my colleagues concluded that the linear subspace in which the tridiagonal matrix is defined (the Krylov subspace) does NOT depend on eigenvalue_offset, which implies it makes (at least mathematically) no difference to the calculation (this still seems curious to me).
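A standard linear-algebra argument (not spelled out in this thread) makes this shift-invariance explicit. With $K_k(A, v) = \mathrm{span}\{v, Av, \dots, A^{k-1}v\}$, for any shift $c$ and any $j < k$,

$$(A + cI)^j v \;=\; \sum_{i=0}^{j} \binom{j}{i}\, c^{\,j-i} A^i v \;\in\; K_k(A, v),$$

so $K_k(A+cI, v) \subseteq K_k(A, v)$; applying the same argument to $A + cI$ with shift $-c$ gives the reverse inclusion, hence $K_k(A+cI, v) = K_k(A, v)$. In exact arithmetic, the Lanczos procedure therefore builds its tridiagonal matrix from the same subspace regardless of eigenvalue_offset.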
However, we couldn't say anything about whether eigenvalue_offset improves numerical efficiency or stability. In addition, the eigenvalue shift can certainly remove eigenvalues which are close to zero (such eigenvalues cause instability). So for safety I currently recommend setting eigenvalue_offset so that the smallest eigenvalue has the maximum absolute value.
The above description may be a little confusing. Additional questions are welcome!
(About LAMMPS)
Couldn't I fork the original LAMMPS repository (lammps/lammps) and send a pull request to yours (jewettaij/lammps)? I tried to do so, but your repository didn't appear in the "base repository" section of "Pull request". Is it okay to make my repository a direct fork of yours?
Thank you for your detailed reply. Unfortunately, this means I have more questions. (See below.)

question: To find the maximum eigenvalue, use find_maximum=true, eigenvalue_offset=0.0.
question: To find the minimum eigenvalue, use find_maximum=false, eigenvalue_offset=-Σ_ij |A_ij|.
question: To find the maximum |eigenvalue| (absolute value), use eigenvalue_offset=0.0, and invoke run() twice, with find_maximum=true and find_maximum=false.
Is this correct?
I'm sorry for this suggestion. You are right: my suggestion is not possible. For some reason, GitHub does not allow people to send pull requests to forks belonging to other people (unless you forked their fork). It seems like you successfully submitted a pull request to Jacob Gissinger's fork of LAMMPS.

EDIT

Optional: More questions... If you have limited time, feel free to skip the remaining questions.
I admit that until today I did not know what the Lanczos method actually does. (I just wanted to borrow some code.) Today I finally read the Wikipedia article on the Lanczos algorithm. Now I realize that your version of the Lanczos method is less general than the original Lanczos method. Your version has been optimized (specialized) for the calculation of a single eigenvalue/eigenvector. I do not mind. This is not a problem. But if I am going to add this code to LAMMPS, it is some small extra work for me. I don't mind, but I have to be able to explain what your code does. So I have more questions:
question: Suppose the original matrix is 6x6 and has eigenvalues {0.1, 0.2, 0.5, 1, 2, 3}. After applying an eigenvalue_offset of -10, these would be {-9.9, -9.8, -9.5, -9, -8, -7}. Suppose that after the Lanczos iteration is finished, the tridiagonal matrix is 3x3. (In other words, k=3.) In this example, it seems like the tridiagonal matrix would probably have eigenvalues {-9.9, -9.8, -9.5} (not {-9, -8, -7}). In that case, if find_maximum=true, then -9.5 will be selected. If find_maximum=false, then -9.9 will be selected. Is this correct?

question: Is the size of the tridiagonal matrix (k) chosen automatically? You don't have to explain how you choose k; I was just curious to know if it is chosen automatically. I suspect that k < matrix_size. Otherwise the algorithm would not be fast. If this is difficult to explain, you don't have to answer this question. This is not an important question. (I was curious.)
Thanks again for answering my questions so far. Andrew
I have a correction to my previous message:
Alternatively
Sorry for so much difficulty. Andrew
Thank you for your reply.

Eigenvalue approximation by tridiagonal matrix

Sorry for not explaining this well enough. In the example you mentioned, the matrix whose eigenvalues are {-9.9, -9.8, -9.5, -9, -8, -7} would be converted to a 3 by 3 tridiagonal matrix having eigenvalues like {-9.9, ?, -7}, not {-9.9, -9.8, -9.5}. Other situations would be something like
These approximated eigenvalues don't depend on find_maximum. There are 2 reasons why I don't provide such functionality. First, I created the library to use it for quantum physics, in which it is rare that both the highest energy state and the lowest energy state are needed simultaneously.

Lanczos iteration

The Lanczos algorithm works for an …
This "Lanczos iteration" corresponds to the loop here
k in the previous post).
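For reference, a schematic, self-contained C++ sketch of that loop (the three-term Lanczos recurrence, not the library's actual implementation) looks like this:

```cpp
// Schematic three-term Lanczos recurrence (illustration only, not lambda_lanczos code).
// Builds the tridiagonal coefficients alpha (diagonal) and beta (off-diagonal)
// for a symmetric matrix that is available only through a matrix-vector product.
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

void lanczos_tridiagonalize(
    const std::function<void(const std::vector<double>&, std::vector<double>&)>& mv_mul,
    std::size_t n, std::size_t max_iter,
    std::vector<double>& alpha, std::vector<double>& beta) {
  std::vector<double> v_prev(n, 0.0), v(n, 0.0), w(n, 0.0);
  v[0] = 1.0;          // initial Lanczos vector (a random unit vector is typical)
  double b_prev = 0.0;
  for (std::size_t k = 0; k < max_iter; ++k) {
    mv_mul(v, w);                                            // w = A v
    double a = 0.0;
    for (std::size_t i = 0; i < n; ++i) a += w[i] * v[i];    // a = v^T A v
    alpha.push_back(a);
    for (std::size_t i = 0; i < n; ++i)                      // orthogonalize against
      w[i] -= a * v[i] + b_prev * v_prev[i];                 // the two previous vectors
    double b = 0.0;
    for (std::size_t i = 0; i < n; ++i) b += w[i] * w[i];
    b = std::sqrt(b);
    // At this point one would compute the extreme eigenvalue of the current
    // (k+1)x(k+1) tridiagonal matrix (e.g. with the Sturm-sequence bisection
    // sketched earlier) and stop once it no longer changes to the desired precision.
    if (b == 0.0) break;          // invariant subspace reached; cannot continue
    beta.push_back(b);
    v_prev = v;
    for (std::size_t i = 0; i < n; ++i) v[i] = w[i] / b;     // next Lanczos vector
    b_prev = b;
  }
}
```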
So the next problem is when to stop the iteration, i.e. how to determine k. Thus we want to determine whether the approximated eigenvalue has converged.
The corresponding code is here ("p" in lambda-lanczos/include/lambda_lanczos/lambda_lanczos.hpp, lines 172 to 177, commit 10d178e).
The second reason mentioned in the previous section is related to this convergence. In fact, the Lanczos algorithm can calculate the 2nd, 3rd, or further minimum (and maximum) eigenvalues simultaneously. But to do so I would have to provide an interface to specify which ones must be calculated, and track all of the corresponding Lanczos vectors and their convergence. Some future version may have such functionality, but the current one doesn't.

To know the maximum-absolute eigenvalue

If you want to know the maximum-absolute eigenvalue …
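Presumably along the lines suggested earlier (invoking run() twice, with find_maximum=true and then false, and comparing absolute values), a sketch of that two-run approach is shown below. The constructor arguments and the run() signature used here are assumptions about the lambda_lanczos API, not something confirmed in this thread; adjust them to the actual header.

```cpp
// Hedged sketch (not confirmed against the actual lambda_lanczos API): find the
// eigenvalue of largest absolute value by running the solver twice, once with
// find_maximum = true and once with find_maximum = false, and comparing |values|.
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>
#include <lambda_lanczos/lambda_lanczos.hpp>  // assumed include path

double max_abs_eigenvalue(
    const std::function<void(const std::vector<double>&, std::vector<double>&)>& mv_mul,
    std::size_t n) {
  double ev_max = 0.0, ev_min = 0.0;
  std::vector<double> vec_max(n), vec_min(n);

  // Constructor arguments (mv_mul, n, find_maximum) and run(eigenvalue, eigenvector)
  // are assumptions; check the signatures in lambda_lanczos.hpp.
  lambda_lanczos::LambdaLanczos<double> engine_max(mv_mul, n, true);
  engine_max.run(ev_max, vec_max);

  lambda_lanczos::LambdaLanczos<double> engine_min(mv_mul, n, false);
  engine_min.run(ev_min, vec_min);

  return std::fabs(ev_max) >= std::fabs(ev_min) ? ev_max : ev_min;
}
```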
About LAMMPS

Sorry for confusing you about forking. As you probably noticed, I previously made a fork from Jacob's fork directly. But after closing it, I deleted it and forked the original LAMMPS repository. So I don't hesitate to delete the original one and fork from yours directly. But before adding my files and sending a pull request, I'll reply to your pull request on the lambda-lanczos repository. Thank you!
Don't worry about this. Choose whichever method is easiest for you to submit the pull request. Either send a pull request to me, or send one to Jacob. We will accept it.
I added a (somewhat complicated) example of a .travis.yml file which compiles and runs your test program using both g++ and clang++. You will have to go to https://travis-ci.org, log in with your GitHub credentials, and click on the lambda-lanczos repository (in your list of repositories) to enable automatic testing before this will work. This was working with an earlier version of your test file (when it used assert() statements, not gtest). However, it is not yet working with your current version of lambda_lanczos_test.cpp (in the develop branch). I am not familiar enough with gtest to offer any advice.
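For reference, a minimal GoogleTest file has roughly the following shape (the test body, names, and tolerance below are placeholders, not taken from lambda_lanczos_test.cpp); on Travis, the usual stumbling block is installing and linking libgtest rather than the test code itself.

```cpp
// Minimal GoogleTest skeleton (placeholder values; not the actual lambda_lanczos_test.cpp).
#include <gtest/gtest.h>

TEST(LambdaLanczosTest, SmallMatrixMaxEigenvalue) {
  // In a real test, `computed` would come from running the solver on a small
  // matrix whose largest eigenvalue `expected` is known analytically.
  double expected = 3.0;   // placeholder
  double computed = 3.0;   // placeholder
  EXPECT_NEAR(expected, computed, 1e-9);
}

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
```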
I also added several optional testing features to your .travis.yml file. I commented all of them out. Feel free to uncomment the features you want:
For this to work, you will have to uncomment the lines in the .travis.yml file which add the "-coverage" argument to the compiler. Then visit https://codecov.io, log in with your GitHub credentials, and click on your repository to enable coverage.
If you don't want this, edit your README.md, and remove the line at the top containing https://codecov.io, so that the badge does not appear.
I added a memory checker (valgrind). You might not need this. Valgrind is very useful for my work because I often use "new" and "delete" to allocate memory. (It seems like you don't do this.) However, valgrind is still useful for reporting errors if you accidentally read (or write) past the end of an array (a tiny example of such a bug appears near the end of this message). Currently, the valgrind memory checking code is commented out, but it's easy to enable it.
Additional testing with "shunit2". (You probably don't need this.)
Some of my testing programs will create a file after they are finished. For these programs, I need to read the contents of this file to see if my program behaved correctly. That's what "shunit2" is useful for. Here is an example of its use:
https://github.com/jewettaij/visfd/blob/master/.travis.yml
It invokes these files:
https://github.com/jewettaij/visfd/blob/master/tests/test_fluctuation_filter.sh
https://github.com/jewettaij/visfd/blob/master/tests/test_membrane_detection.sh
...
(Note: Some people prefer to use "bats" instead of "shunit2".)
Again, you probably don't need this for this repository.
This is currently commented out.
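As mentioned in the valgrind paragraph above, here is a tiny illustration (not from this repository) of the kind of off-by-one access valgrind reports:

```cpp
// Illustration only (not from this repository): an off-by-one heap access.
// The program usually appears to run fine, but valgrind reports an
// "Invalid read" at the marked line.
#include <iostream>

int main() {
  double* a = new double[5];
  for (int i = 0; i < 5; ++i) a[i] = i;
  double sum = 0.0;
  for (int i = 0; i <= 5; ++i)   // BUG: should be i < 5; a[5] is one past the end
    sum += a[i];
  std::cout << sum << '\n';
  delete[] a;
  return 0;
}
```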
Hope this is useful!
Andrew