feat: remove redundant additions #6


Open · wants to merge 6 commits into base: master

Conversation

zac-williamson

reduced benchmarks to only test public methods

improved performance by ~5-10%

Description

The old version of the lib did not have much documentation of the algorithm; this PR adds extensive comments.

In addition, some redundant addition gates were removed from the SHA512 permutation, which reduces the cost of the permutation by ~10%.
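The gate saving comes from collapsing long linear sums into a single witness before they are reused (see the `std::as_witness` call in the diff below). A minimal Noir sketch of the pattern, using a hypothetical function rather than the PR's actual code:

```noir
// Hypothetical illustration of the optimisation described above.
// When a long linear sum feeds several later expressions, pinning it to a
// single witness stops the backend from re-synthesising the addition gates
// once per use site.
fn accumulate(terms: [Field; 8]) -> Field {
    let mut sum: Field = 0;
    for i in 0..8 {
        sum += terms[i];
    }
    // Without this, each later use of `sum` can duplicate the addition gates.
    std::as_witness(sum);
    sum
}
```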

PR Checklist*

  • I have tested the changes locally.
  • I have formatted the changes with Prettier and/or cargo fmt on default settings.

@github-project-automation github-project-automation bot moved this to 👀 To Triage in Noir Libraries Mar 3, 2025
@Savio-Sou Savio-Sou moved this from 👀 To Triage to 📝 To Review in Noir Libraries Mar 3, 2025
src/lib.nr Outdated
@@ -89,9 +273,10 @@ pub mod sha512 {

}

// methods here are interface-compatible with stdlib
Member

sha256 and sha512 have been removed from the stdlib so we don't need to maintain compatibility btw.

Author

what about the sha256 blackbox interface? that's the one I want to maintain compatibility with tbh

Member

sha256 blackbox interface

To be clear, do you mean std::hash::sha256_compression here? That's the only blackbox function related to sha256 and the only sha256 logic in the stdlib.

https://github.com/noir-lang/noir/blob/0a64e8ee6497ef228a92f42af0bdb5081025bf37/noir_stdlib/src/hash/mod.nr#L10-L13

I doubt this is the case based on the code this comment is on.
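For context, the stdlib function referenced above is a thin wrapper over the blackbox opcode. Its declaration looks roughly like the following (a sketch; check `noir_stdlib/src/hash/mod.nr` at the linked commit for the authoritative signature):

```noir
// Approximate shape of the stdlib blackbox declaration being discussed:
// takes a 16-word message block and an 8-word state, returns the new state.
#[foreign(sha256_compression)]
pub fn sha256_compression(_input: [u32; 16], _state: [u32; 8]) -> [u32; 8] {}
```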

Member

Ah, I see you have a similar comment on the implementation of the compression function here. I've commented on a different part of the code.

Contributor

I think this is just about the black-box compression function

@TomAFrench TomAFrench changed the title chore: added comments, improved performance by 5-10% feat: remove redundant additions Mar 4, 2025
@TomAFrench (Member)

@kashbrti most of my comments have been addressed (just the thing about stdlib compatibility unresolved) so it's ready for a review from you.

@@ -0,0 +1,248 @@
use crate::tables::{
Contributor

this file is just replacing the table computation code in the tables.nr file I guess

Author

yes that's correct

@kashbrti (Contributor)

@zac-williamson the diff for this PR is quite massive: there's lots of refactoring happening, plus added comments and optimizations. I believe we should test this implementation against a Rust SHA-512 lib using noir-runner (I can submit a PR for this later).
Other than that, could you please add comments (on the PR) where the optimizations are happening, as I think that's the most crucial part to check properly.

Thanks for the extensive comments and the refactoring :)
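As a reference for such cross-checks, known-good digests can be generated with Python's standard `hashlib` (shown here as an illustrative stand-in for the suggested Rust reference library; the function name is hypothetical):

```python
# Reference digests for cross-checking a SHA-512/SHA-384 circuit implementation.
import hashlib

def reference_vector(msg: bytes) -> dict:
    """Return hex digests to compare a circuit's output against."""
    return {
        "sha512": hashlib.sha512(msg).hexdigest(),
        "sha384": hashlib.sha384(msg).hexdigest(),
    }

v = reference_vector(b"abc")
print(v["sha512"][:16])  # well-known FIPS test vector prefix: ddaf35a193617aba
```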

@TomAFrench (Member)

@kashbrti you could split some of the less critical changes out into separate PR(s). Pulling out the table computation + benchmarks would remove a good amount of the diff.

@Savio-Sou Savio-Sou moved this from 📝 To Review to 🚧 Blocked in Noir Libraries Mar 20, 2025
@Savio-Sou Savio-Sou moved this from 🚧 Blocked to 📋 To Do in Noir Libraries Mar 20, 2025
let temp1 = state.h.raw + S1 + choose_output + (SHA512_ROUND_CONSTANTS[i] as Field) + w[i];
// temp1 is a linear sum of ~50 inputs. Without std::as_witness, the addition gates required to evaluate it are duplicated
// for both `raw_e` and `raw_a`
std::as_witness(temp1); // temp1 = 106 gates
Author

optimisation

// 34 30
// 39 25
EncodedChoose { raw: e, e: e_ror0, ror14, ror18, ror41 }
let mut e_ror0: Field = base4_encoded_slices[0];
Author

see encode_message_extension comment. same reasoning


Changes to circuit sizes

Generated at commit: 92692d5c9fde98c7f49c55355a08f5d3a1fa0032, compared to commit: 309404b501d935b7856e1788530cba44abe44ad2

🧾 Summary (10% most significant diffs)

| Program | ACIR opcodes (+/-) | % | Circuit size (+/-) | % |
| --- | --- | --- | --- | --- |
| test_sha384_256.json | +244 ❌ | +0.64% | -7,051 ✅ | -6.70% |
| test_sha512_256.json | +244 ❌ | +0.64% | -7,051 ✅ | -6.70% |

Full diff report 👇

| Program | ACIR opcodes (+/-) | % | Circuit size (+/-) | % |
| --- | --- | --- | --- | --- |
| test_sha384_100.json | 13,174 (+76) | +0.58% | 41,247 (-2,329) | -5.34% |
| test_sha512_100.json | 13,190 (+76) | +0.58% | 41,247 (-2,329) | -5.34% |
| test_sha384_1.json | 13,075 (+76) | +0.58% | 41,123 (-2,329) | -5.36% |
| test_sha512_1.json | 13,091 (+76) | +0.58% | 41,123 (-2,329) | -5.36% |
| test_sha384_256.json | 38,593 (+244) | +0.64% | 98,216 (-7,051) | -6.70% |
| test_sha512_256.json | 38,609 (+244) | +0.64% | 98,216 (-7,051) | -6.70% |
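The percentage columns in the report are relative to the pre-PR sizes, which can be checked with a quick calculation (using the `test_sha512_256` row):

```python
# Sanity-checking the reported deltas: each percentage should equal
# delta / previous_size for that row of the report.
new_size, delta = 98_216, -7_051
old_size = new_size - delta           # 105,267 gates before this PR
pct = 100 * delta / old_size
print(f"{pct:.2f}%")                   # -6.70%, matching the report
```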

@kashbrti (Contributor) left a comment

Other than a few nitpicks, it looks good to me.

EncodedWitness { raw: w, w: ror0, ror1, ror8, ror19, ror61, rshift6, rshift7 }
// W2_POWER_SUM represents scalars that each base4_encoded_slices member must be multiplied by, in order to produce
// `encode(raw.rotate19) + encode(raw.rotate61) + encode(raw >> 6)`
let W2_POWER_SUM: [Field; 12] = [
Contributor

Why isn't this a global?


// W15_POWER_SUM represents scalars that each base4_encoded_slices member must be multiplied by, in order to produce
// `encode(raw.rotate1) + encode(raw.rotate8) + encode(raw >> 7)`
let W15_POWER_SUM: [Field; 12] = [
Contributor

ditto
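The suggestion in the two comments above amounts to hoisting these constant arrays out of the function body into compile-time globals, so they are declared once instead of per call. A Noir sketch with hypothetical values (not the PR's actual scalars):

```noir
// Sketch of the suggested change: a function-local constant array like
//     let W2_POWER_SUM: [Field; 12] = [ ... ];
// can be promoted to a compile-time global defined once at module scope.
global EXAMPLE_POWER_SUM: [Field; 3] = [1, 4, 16]; // hypothetical values

fn weighted_sum(slices: [Field; 3]) -> Field {
    let mut acc: Field = 0;
    for i in 0..3 {
        acc += slices[i] * EXAMPLE_POWER_SUM[i];
    }
    acc
}
```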

src/lib.nr Outdated
@@ -89,9 +273,10 @@ pub mod sha512 {

}

// methods here are interface-compatible with stdlib
Contributor

I think this is just about the black-box compression function

Labels: None yet
Projects: Status: 📋 To Do
3 participants