
Conversation

@jewettaij (Contributor) commented Feb 26, 2020

Summary:

LambdaLanczos::run() now sets eigenvalue_offset to ±r automatically (where r = max_j Σ_i |A_ij|), with the sign chosen according to find_maximum.

(Special case: If eigenvalue_offset was specified in advance by the caller to a non-zero value, then it will not be modified.)
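Below is a minimal sketch of the behavior described above (illustrative only, not the actual diff): the offset is chosen automatically only when the caller left eigenvalue_offset at zero, and its sign follows find_maximum. The bound r is assumed to have been computed as max_j Σ_i |A_ij|.

```cpp
// Illustrative sketch of the decision rule described in this PR summary.
// r is assumed to be the column-sum bound max_j sum_i |A_ij|, which bounds
// the magnitude of every eigenvalue of a symmetric matrix A.
double choose_offset(double user_offset, bool find_maximum, double r) {
  if (user_offset != 0.0) {
    return user_offset;              // special case: the caller's choice wins
  }
  return find_maximum ? +r : -r;     // shift so the target eigenvalue dominates
}
```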

Quote from your README.md file:

"If you want to calculate the maximum eigenvalue, you should use eigenvalue_offset = r. To calculate the minimum eigenvalue eigenvalue_offset = -r"

If you decide to accept this pull request, perhaps we should also update the README.md file (because it would no longer be necessary for the user to choose eigenvalue_offset).
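For context, here is a minimal sketch of README-style usage in which the caller computes r for a small dense matrix and supplies it as eigenvalue_offset before calling run(). The include path, constructor arguments, and the public eigenvalue_offset member are assumptions based on the usage discussed in this thread, not code taken from this pull request.

```cpp
// Sketch only: caller-side computation of r = max_j sum_i |A_ij|,
// passed to LambdaLanczos as eigenvalue_offset before run().
// Names and signatures below follow README-style usage (assumed here).
#include <lambda_lanczos/lambda_lanczos.hpp>
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

int main() {
  const std::size_t n = 3;
  std::vector<std::vector<double>> A = {{ 2.0, -1.0,  0.0},
                                        {-1.0,  2.0, -1.0},
                                        { 0.0, -1.0,  2.0}};

  // The matrix is supplied to the library as a matrix-vector product lambda.
  auto mv_mul = [&](const std::vector<double>& in, std::vector<double>& out) {
    for (std::size_t i = 0; i < n; ++i) {
      double sum = 0.0;
      for (std::size_t j = 0; j < n; ++j) sum += A[i][j] * in[j];
      out[i] = sum;
    }
  };

  // r = max_j sum_i |A_ij| bounds the magnitude of every eigenvalue.
  double r = 0.0;
  for (std::size_t j = 0; j < n; ++j) {
    double col_sum = 0.0;
    for (std::size_t i = 0; i < n; ++i) col_sum += std::abs(A[i][j]);
    r = std::max(r, col_sum);
  }

  lambda_lanczos::LambdaLanczos<double> engine(mv_mul, n, true); // true: find the maximum eigenvalue
  engine.eigenvalue_offset = r;   // +r for the maximum, -r for the minimum

  double eigenvalue = 0.0;
  std::vector<double> eigenvector(n);
  engine.run(eigenvalue, eigenvector);
  return 0;
}
```

Computing r this way is O(n^2) for a dense matrix, which is the caller-side cost discussed later in this thread.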

Request

This pull request contains the code I now use in the LAMMPS version of LambdaLanczos. If this new code is incorrect, please let me know.

GTEST failure

In my tests, this new code works well.

Unfortunately, one of the google-tests fails:

Error message (short version)

"the difference between correct_eigvec[i]sign and eigvec[i] is 1.3480605520754807e-11 which slightly exceeds std::abs(correct_eigvalueengine.eps*10) which is 9.9905011031062095e-12.
correct_eigvec[i]*sign evaluates to 0.21358131557897264, and eigvec[i] evaluates to 0.21358131559245325

I am not sure whether I should be concerned by this small difference.

Error message (long version)

1: /home/travis/build/jewettaij/lambda-lanczos/test/lambda_lanczos_test.cpp:330: Failure
1: The difference between correct_eigvec[i]*sign and eigvec[i] is 1.3480605520754807e-11, which exceeds std::abs(correct_eigvalue*engine.eps*10), where
1: correct_eigvec[i]*sign evaluates to 0.21358131557897264,
1: eigvec[i] evaluates to 0.21358131559245325, and
1: std::abs(correct_eigvalue*engine.eps*10) evaluates to 9.9905011031062095e-12.
1: [  FAILED  ] DIAGONALIZE_TEST.RANDOM_SYMMETRIC_MATRIX (0 ms)
1: [----------] 5 tests from DIAGONALIZE_TEST (2 ms total)
1: 
1: [----------] Global test environment tear-down
1: [==========] 7 tests from 2 test cases ran. (2 ms total)
1: [  PASSED  ] 6 tests.
1: [  FAILED  ] 1 test, listed below:
1: [  FAILED  ] DIAGONALIZE_TEST.RANDOM_SYMMETRIC_MATRIX
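For readability, the failing assertion reported above is equivalent to a check of the following form (reconstructed from the message itself; this is not the actual test source, and the function name here is made up):

```cpp
// Reconstruction of the reported check: each eigenvector component must
// match to within |correct_eigenvalue * eps * 10|.
#include <cmath>

bool component_close(double correct_component, double computed_component,
                     double correct_eigenvalue, double eps) {
  const double tolerance = std::abs(correct_eigenvalue * eps * 10);
  return std::abs(correct_component - computed_component) <= tolerance;
}
```

With the numbers in the message, the difference (about 1.35e-11) is only slightly above the tolerance (about 9.99e-12).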

I am curious to see whether you accept this pull request.

I don't know if I remembered to thank you for your long explanations of the Lanczos algorithm.
Your explanation is much clearer than the Wikipedia article.

Cheers

Andrew

…ream (mrcdr's) Makefile. Hopefully after this commit, the only differences will be in the "lambda_lanczos_util.hpp" file.
….hpp* file and corrected 2 mistakes. The code now compiles and runs successfully again.
…ximum (...unless it was specified in advance by the caller)
@jewettaij (Contributor Author)

I forgot to mention: The only file that was changed was lambda_lanczos.hpp.

Please ignore changes to lambda_lanczos_util.hpp

However, when you look at the changes, it will appear as though I also modified lambda_lanczos_util.hpp. The only reason I modified this file was to undo the changes I had made earlier, so that it now matches the current version in your develop branch. In other words, this pull request contains the version of lambda_lanczos_util.hpp from your develop branch as of 2020-02-25. (It does not matter either way: the remaining code in this pull request works with either version of this file.)

@mrcdr (Owner) commented Feb 29, 2020

Your code seems correct mathematically. However, the routine to determine eigenvalue_offset costs as much as a full (tri-)diagonalization of the Lanczos algorithm, which calls mat_mul n times. In practice, we only have to call it far fewer than n times to calculate an extreme eigenvalue, and that is why the Lanczos algorithm works efficiently for huge matrices.

Actually, this is the weak point of my library: since there is no way to access matrix elements directly (because the matrix is supplied as a lambda), we cannot calculate eigenvalue_offset in O(n^2) inside the library. I'm sorry, but I think my existing code should already be optimal for large matrices.
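(To illustrate the cost: with only the lambda available, the column sums needed for r = max_j Σ_i |A_ij| can only be probed one unit vector at a time, i.e. one mat_mul call per column and n calls in total. The sketch below is illustrative only; the mat_mul signature shown is an assumption, not the library's exact type.)

```cpp
// Illustrative sketch: estimating r = max_j sum_i |A_ij| through a
// matrix-free interface. Each unit vector e_j costs one mat_mul call,
// so the whole estimate costs n calls -- comparable to a full Lanczos
// tridiagonalization, which is the objection raised here.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

using MatMul = std::function<void(const std::vector<double>&, std::vector<double>&)>;

double estimate_offset_bound(const MatMul& mat_mul, std::size_t n) {
  double r = 0.0;
  std::vector<double> e(n, 0.0);
  std::vector<double> column(n, 0.0);
  for (std::size_t j = 0; j < n; ++j) {
    e[j] = 1.0;
    std::fill(column.begin(), column.end(), 0.0);
    mat_mul(e, column);                         // column = A * e_j, the j-th column of A
    double col_sum = 0.0;
    for (double x : column) col_sum += std::abs(x);
    r = std::max(r, col_sum);
    e[j] = 0.0;
  }
  return r;
}
```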

I think the test failure you saw arises from this. Since I set the threshold unnecessarily strictly, don't worry about the error (I should relax it, but I don't know how to determine it in a mathematically correct way...).

@jewettaij (Contributor Author) commented Mar 2, 2020

Your code seems correct mathematically. However, the routine to determine eigenvalue_offset costs as much as a full (tri-)diagonalization of the Lanczos algorithm, which calls mat_mul n times. In practice, we only have to call it far fewer than n times to calculate an extreme eigenvalue, and that is why the Lanczos algorithm works efficiently for huge matrices.

I am glad that you do not object to my code on mathematical grounds.

Actually, this is the weak point of my library: since there is no way to access matrix elements directly (because the matrix is supplied as a lambda), we cannot calculate eigenvalue_offset in O(n^2) inside the library. I'm sorry, but I think my existing code should already be optimal for large matrices.

I agree.

It is better (more efficient) for users to calculate eigenvalue_offset manually. That is why I do not invoke my new code if eigenvalue_offset != 0; it is only invoked if the user did not choose eigenvalue_offset. My code is a safeguard to protect lazy/stupid users.

I suspect that a fraction of users will be too lazy, impatient, or confused to figure out how to calculate eigenvalue_offset.

I worry that unless we provide them with a default eigenvalue_offset, many lazy users might not set eigenvalue_offset and could get incorrect results. (?) (But I also worry that if you accept this code, then most users will ignore eigenvalue_offset, and consequently they will not get fast performance from your code, although their results will be correct.)

Question: Which do you prefer?

  1. Answer their questions about why the result is inaccurate?
  2. Answer their questions about why your code is not as fast as they were expecting?
  3. Ignore questions from these clueless users. (This is a reasonable choice.)

If you prefer #2, then use my code.

I think the test failure you saw arises from this. Since I set the threshold unnecessarily strictly, don't worry about the error (I should relax it, but I don't know how to determine it in a mathematically correct way...).

Thank you for addressing this concern.
Your reply is very helpful to me.

I can proceed with changing the LAMMPS code now.

@jewettaij (Contributor Author) commented Mar 2, 2020

I should not have suggested changing the README.md file to remove the discussion of eigenvalue_offset. (I agree that it is better for users to choose eigenvalue_offset themselves instead of relying on my code. I confess that when I created the pull request, I did not think carefully enough about efficiency.)

@jewettaij (Contributor Author) commented Mar 2, 2020

My apologies.
Today I realized that many users do not need to specify eigenvalue_offset at all. (If the matrix is positive definite, it is better to leave eigenvalue_offset = 0; in these cases my code makes the method very slow while providing no benefit.)

I am feeling a bit foolish. I am closing this pull request.

(I will also remove my modification from the LAMMPS version of LambdaLanczos. I will add more comments to that version of the code explaining the importance of the eigenvalue_offset. I wanted to avoid discussing the eigenvalue_offset, but it is necessary.)

@jewettaij closed this Mar 2, 2020