refactor: remove non-test sqlx macro usage #437
Conversation
LGTM.
```diff
@@ -102,9 +102,9 @@ pub struct SbtcRequests {
     /// constructing their next UTXO.
     pub signer_state: SignerBtcState,
     /// The minimum acceptable number of votes for any given request.
-    pub accept_threshold: u32,
+    pub accept_threshold: u16,
```
I'm okay with changing this to `u16` because I don't perceive much risk of there being 65,535 signers.
Yeah, we'd also have to change the smart contracts -- or discontinue noting an accurate signer bitmap on chain -- if there were more than 128 signers.
```diff
 let txid = bitcoin::Txid::from_byte_array(
     request.txid.try_into().map_err(|_| Error::TypeConversion)?,
 );

-let vout = request
-    .output_index
-    .try_into()
-    .map_err(|_| Error::TypeConversion)?;
-
-let max_fee = request
-    .max_fee
-    .try_into()
-    .map_err(|_| Error::TypeConversion)?;
+let vout = request.output_index;

 let signer_bitmap = BitArray::ZERO; // TODO(326): Populate

-let amount = request
-    .amount
-    .try_into()
-    .map_err(|_| Error::TypeConversion)?;
-
 Ok(Self {
     outpoint: OutPoint { txid, vout },
-    max_fee,
+    max_fee: request.max_fee,
     signer_bitmap,
-    amount,
+    amount: request.amount,
     deposit_script: ScriptBuf::from_bytes(request.spend_script),
     reclaim_script: ScriptBuf::from_bytes(request.reclaim_script),
     signers_public_key,
```
What do you think of doing the decomposition up front?
```rust
let model::DepositRequest {
    txid,
    output_index,
    spend_script,
    reclaim_script,
    recipient: _,
    amount,
    max_fee,
    sender_addresses: _,
} = request;

let txid = bitcoin::Txid::from_byte_array(
    txid.try_into().map_err(|_| Error::TypeConversion)?,
);

let vout = output_index;
let signer_bitmap = BitArray::ZERO; // TODO(326): Populate

Ok(Self {
    outpoint: OutPoint { txid, vout },
    max_fee,
    signer_bitmap,
    amount,
    deposit_script: ScriptBuf::from_bytes(spend_script),
    reclaim_script: ScriptBuf::from_bytes(reclaim_script),
    signers_public_key,
})
```
I'm mostly curious, you don't have to do this. Same applies to the other function that does something similar.
I find the approach that does not do the decomposition a little easier to read. That's because I only need to look at the lines in the returned value to see what value was assigned to any particular field (assuming `request` is immutable). Under the decomposition approach, there might be some intermediate change to a variable that I need to look out for. Also, the non-decomposition approach is fewer lines of code and doesn't seem much more complex. But the destructuring done in this example is easy enough to read, so I wouldn't sweat it if I were reviewing.
Anyway, I think that's the general "principle" for me when I'm writing code. It's not set in stone for me, so I'm also curious about how you (and others) think about this.
Wherever possible, when I see `amount: request.amount` I want it to be just `amount`. That said, I actually do agree with your position; I was just curious what you thought.
Force-pushed from `79afc74` to `14e6eb6`.
Saw the two new changes. They look good to me.
Description
Addresses part of #411.
There are a number of paths that can address the linked issue. The path taken here is to remove our sqlx macro usage. The thing is, this PR doesn't completely remove our macro usage, since we still use it for tests. Once we remove our macro usage in tests, we can easily bump our stacks core dependency. I went this path partly because it allowed for a few simplifications in the code base.
Like #436, this was done because I needed something mentally simple to work on in transit.
Changes
- Removed `sqlx` macro usage outside of integration test setup.
- Removed `sqlx-cli` as a quasi-dev requirement.
- Made the `amount` and `max_fee` types `u64`s. This is possible because we aren't using the sqlx macros for our queries.

The following are the more questionable changes:
- Made the `thresholds` and `num_signers` integers `u16` types. I think that we'll have other issues if these numbers are greater than a few hundred in practice, so `u16` seems fine.
- Made one other value a `u16` too. This will usually be much less than the expected number of bitcoin blocks in a year, so a `u16` is fine. But maybe a `u32` is better here instead.
- Changed the `DbRead` and `DbWrite` traits. I don't feel too strongly about the change, but it allows for things to be much simpler as well.

My apologies to @netrome for removing our sqlx macro usage.
Testing Information
If tests pass then this is probably fine. I should also check the values in the postgres implementations of the `DbRead` and `DbWrite` trait functions. Right now I just do naked `as` casts, which is not strictly correct, but very likely fine in practice. I'll probably change it later.

Checklist: