Larger Finite Fields, take 2 #2997
base: master
Conversation
Codecov Report

```
@@            Coverage Diff            @@
##             master    #2997   +/-   ##
=========================================
  Coverage          ?   83.64%
=========================================
  Files             ?      688
  Lines             ?   336639
  Branches          ?        0
=========================================
  Hits              ?   281566
  Misses            ?    55073
  Partials          ?        0
=========================================
```
Working to clean up the test failures. Mostly straightforward, but there is a problem with tests in … This fails now because … Also, there is a problem building …
If this is merged, we might want to add a virtual enumerator for Galois fields instead of making a plain list.
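A minimal sketch of what such an enumerator could look like, assuming GAP's `EnumeratorByFunctions` machinery (the name `VirtualGFEnumerator` and the chosen element order are illustrative, not a committed design):

```gap
# Hand out the n-th element of GF(q) on demand instead of storing all q
# elements in a plain list: element 1 is zero, element n+2 is z^n.
VirtualGFEnumerator := function(q)
    local z;
    z := Z(q);   # a primitive root of GF(q)
    return EnumeratorByFunctions(GF(q), rec(
        ElementNumber := function(enum, n)
            if n = 1 then return Zero(z); fi;
            return z^(n - 2);
        end,
        NumberElement := function(enum, elm)
            if IsZero(elm) then return 1; fi;
            return LogFFE(elm, z) + 2;
        end,
        Length := enum -> q));
end;;
```

Iteration and `Position` would then work without ever allocating a length-q list.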
Some quick initial comments.
This looks as if it tries making larger GF-elements that lie in the small prime field (presumably under the assumption that they are stored differently internally), and checks that compression into the small-field vector format works. To make the exponent changeable, replace the construction with one which writes down the same list without enumerating the large field.
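The original inline snippets were lost from this page, so the following is only a guess at the shape of such a test: write down the prime-field scalars directly, pushed into the large-field representation via its identity, instead of enumerating all of GF(p^k). The parameters are purely illustrative:

```gap
p := 3;;  k := 8;;                       # k is the exponent one would make changeable
one := Z(p^k)^0;;                        # identity carried in the large field GF(p^k)
v := List([0 .. p - 1], i -> i * one);;  # prime-field elements, large-field storage
ConvertToVectorRep(v, p);                # check compression into the GF(p) vector format
```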
Some more nitpicks. But overall, this seems to be moving in a very promising direction, thank you!
The remaining test failures arise from interactions between …
Force-pushed from 4080ad9 to 3dd701c.
Now just down to the … The only difference I can see it making is that they might not need to be rebuilt if you switch back and forth between architectures and rebuild GAP each time, but I'm not sure of that.
Putting those files into … However, the … I have "looking at …" …
@fingolfin Thanks a lot. Take your time. Although, before this PR gets too old, we need to decide whether to include it in 4.11, forcing the change on …
I'd like to avoid having it sit around for too long. I hope @sebasguts and I can make a NormalizInterface release soon. Once that has happened and has been picked up, we can hopefully proceed with this PR. It would be nice to get some feedback from @jdebeule first, too (whether he's OK with what was done to his PR ;-).
Since I had no time to work further on the original PR, I am very grateful that this was picked up! So I am OK with what has been done to the original PR.
I have now rebased this atop PR #3137 (which should be merged first, and then we can rebase this PR once again). I also added the code by @jdebeule for regenerating the table of Conway polynomials, though it is not yet fully hooked up (see the commit message of the WIP commit adding it for more details). Also, some of the changes in this PR were squashed by me into a single "WIP" commit; I plan to break this up further later on, but for now I wanted to make sure my changes so far are backed up and tested on GitHub ;-).
Force-pushed from f9cd0dd to f33d435.
This has now been reduced to a relatively small set of changes (except for the roughly 7000 newly generated lines in …). One thing that concerns me a bit about this PR is that the generated file …
@fingolfin Given that, as you say, it is not actually a problem on modern machines, I'm not inclined to worry about this.

The root problem is that we need to number all the prime powers up to whatever our limit is, in a way that does not introduce race conditions and allows reasonably quick lookup. It's not essential that the numbering be entirely dense, but we don't want to use more bits than we have to. Calculating it at startup (before we have multiple threads) is a bit slower than we'd like. Doing it on demand (so that the tables would be populated as far as the largest q so far considered) would be possible, but adds complexity. Loading the data at run-time (for instance via mmap) would be possible, but really is just hiding the problem.

One thing we might want to do is export this data to GAP level in some way, so that it is available for factoring, primality testing, etc.

In the somewhat unlikely event that someone did want to build a 64-bit version of GAP with minimal memory footprint, then (a) their main job will be to minimize how much of the library they load, and (b) they can easily configure it to use fewer than 24 bits' worth of finite fields.
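For illustration, a GAP-level sketch of such a numbering (the names `primePowers` and `FFNumber` are made up, and the limit is kept small so the example runs instantly; the real tables are generated in C at build time):

```gap
limit := 2^16;;                                           # the PR's real limit is larger
primePowers := Filtered([2 .. limit], IsPrimePowerInt);;  # computed once, up front
FFNumber := q -> PositionSorted(primePowers, q);;         # dense index via binary search
FFNumber(3^4);                                            # index of GF(81) in the numbering
```

Built once before any threads start, a table like this would also give the GAP-level export mentioned above almost for free.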
This is a rebased and somewhat reworked version of #2846. The biggest addition is that the lookup tables are computed at build time using etc/ffgen.c, which has been somewhat updated. The decision about what size of internal finite fields to use on which architectures is made entirely in etc/ffgen.c; in principle it should be possible to go to 23 bits or 25 bits just by editing that programme.

I've done quite a bit of refactoring of the finite fields code in the process, mostly introducing inline functions and removing some duplication.
Still to do are updates to tests, documentation, and comments.
I'd welcome @fingolfin looking over this, especially the changes to the makefiles, and @jdebeule confirming I haven't done anything too stupid to his code.