fix: external function varchar return type validation #3400
Conversation
Force-pushed from 11c5fcf to 92de81c, then from 92de81c to e8f70e3.
LGTM with a few notes (we can address this on our side):
- add an entry in the migration guide
- add an acceptance test (for VARCHAR with and without parameter)
- (nit) update the comment above the change
/ok-to-test sha=e8f70e390b28b28a9a4f4781fe91fdf59f08590f
Integration tests failure for e8f70e390b28b28a9a4f4781fe91fdf59f08590f
I would like to add to what @sfc-gh-jmichalak wrote. We need to validate the previous behavior and the proposed correctness (that's why we require at least acceptance tests). Ideally, we should address all the data types that are potentially affected (I doubt that VARCHAR is the only invalid one here). Because of that, an integration test for the DESCRIBE behavior should also be added to the test set, similar to e.g.
for _, tc := range []string{
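The `for _, tc := range []string{` fragment above suggests a table-driven test over type strings. A minimal, self-contained sketch of that pattern is below; the cases, the `baseType` helper, and the expected normalization are illustrative assumptions, not the provider's actual test data.

```go
package main

import (
	"fmt"
	"strings"
)

// baseType is a hypothetical helper: DESCRIBE may return "VARCHAR" for a
// function created with "VARCHAR(10)", so comparisons here are done on the
// base type name with any parameter list stripped.
func baseType(t string) string {
	if i := strings.IndexByte(t, '('); i >= 0 {
		t = t[:i]
	}
	return strings.TrimSpace(t)
}

func main() {
	// Table-driven iteration over parameterized and bare type strings.
	for _, tc := range []string{"VARCHAR", "VARCHAR(10)", "NUMBER", "NUMBER(38, 0)", "VARIANT"} {
		fmt.Printf("%-14s -> %s\n", tc, baseType(tc))
	}
}
```

A real integration test would create the external function, run DESCRIBE, and compare the returned type against the configured one using a normalization like this, rather than printing.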
1. We introduce all the necessary changes (for all the data types) and tests while working on the `snowflake_external_function` resource stabilization. We can't share any timelines at the moment, but for sure after GA.
2. You continue to work on this PR with our guidance, following our contribution guidelines. Here there are two suboptions:
   - you follow the missing parts described in 1.
   - you add the bare minimum needed, which would be: adding acceptance tests only for VARCHAR, documenting the error and solution, and checking the behavior with and without VARCHAR parameters - we will handle all other data types and tests later on.

Please let us know which option you would prefer.
```diff
@@ -407,7 +407,7 @@ func ReadContextExternalFunction(ctx context.Context, d *schema.ResourceData, me
 	case "returns":
 		returnType := row.Value
 		// We first check for VARIANT or OBJECT
-		if returnType == "VARIANT" || returnType == "OBJECT" {
+		if returnType == "VARIANT" || returnType == "OBJECT" || returnType == "VARCHAR" {
```
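The patch adds VARCHAR to an enumerated special-case list. One way the check could be generalized, instead of enumerating types, is to compare the configured type against the type Snowflake returns on the base name alone. This is a hedged sketch; `sameBaseType` and its callers are invented names, not the provider's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// sameBaseType reports whether two Snowflake type strings share the same base
// type, ignoring any parameter list such as "(10)" or "(38, 0)". This would
// let "VARCHAR(10)" in config match the bare "VARCHAR" that DESCRIBE returns.
func sameBaseType(configured, returned string) bool {
	strip := func(s string) string {
		if i := strings.IndexByte(s, '('); i >= 0 {
			s = s[:i]
		}
		return strings.ToUpper(strings.TrimSpace(s))
	}
	return strip(configured) == strip(returned)
}

func main() {
	fmt.Println(sameBaseType("VARCHAR(10)", "VARCHAR"))  // prints "true": parameters are stripped
	fmt.Println(sameBaseType("NUMBER(38,0)", "VARCHAR")) // prints "false": different base types
}
```

This would cover NUMBER, TIMESTAMP, and the other parameterized types raised below without growing the special-case list, at the cost of ignoring parameter changes entirely.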
What about other types (like number, timestamp, etc.)? They also come in with-parameters and without-parameters flavours and they should be addressed too.
Really imo it seems much easier to just remove all the validation logic and let Snowflake handle it server-side on apply. Snowflake already has validation logic and it seems counterproductive trying to reproduce/reverse engineer that closed source logic in an OSS project.
As I mentioned, the data types will be reworked in the provider. This logic will probably be trashed either way.
However, we need to handle the data type logic on the provider side to correctly handle plans and diff suppressions; we can't just remove it and hope for the best; these resources would be unusable.
Additionally, there are users, like you, who would prefer the "applies" to fail (full sf side validation), but there are also those who would like to have almost 100% parity between successful "plans" and successful "applies." The current approach lies somewhere in the middle.
> However, we need to handle the data type logic on the provider side to correctly handle plans and diff suppressions; we can't just remove it and hope for the best; these resources would be unusable.
I'm not sure what you mean here. Diff suppression is separate and orthogonal to config validation. It is very common practice in providers to do minimal config validation and allow the API to do full validation. Ref: AWS Terraform Provider
> but there are also those who would like to have almost 100% parity between successful "plans" and successful "applies."

That's exactly the issue here. `VARCHAR(10)` plans correctly but fails to apply, since `VARCHAR(10)` is submitted to Snowflake but `VARCHAR` is returned from Snowflake, which fails validation when Terraform tries to save/update state.
> That's exactly the issue here. `VARCHAR(10)` plans correctly but fails to apply, since `VARCHAR(10)` is submitted to Snowflake but `VARCHAR` is returned from Snowflake, which fails validation when Terraform tries to save/update state.
No argument here: we know this preview resource is not handling data types correctly, we did not come up with this logic, and we will correct it while reworking this resource.
> I'm not sure what you mean here. Diff suppression is separate and orthogonal to config validation. It is very common practice in providers to do minimal config validation and allow the API to do full validation. Ref: AWS Terraform Provider
Unfortunately, some logic is needed if you want to handle both the user config changes and the external changes properly and, at the same time, allow users to use data types without attributes (using default ones, e.g. `NUMBER` = `NUMBER(38, 0)`). For example, you need to be able to differentiate between these situations:
- User sets VARCHAR(10) in config; then, they change it to VARCHAR (change should happen)
- User sets VARCHAR(10) in config; Snowflake returns VARCHAR (change should be suppressed)
Other example:
- User sets NUMBER in the config, Snowflake returns NUMBER(38, 0) (change should be suppressed)
- User sets NUMBER in the config; then, they change it to NUMBER(38, 0) (change should happen - or not, depending on the approach we take; the question is how we want to handle the default change in Snowflake and how it would be performed on the Snowflake side for existing objects)
- User sets NUMBER in the config; later on, Snowflake returns NUMBER(38, 2) (change should happen)
All these situations will be handled correctly when we rework this preview resource, which will not happen any time soon.
Force-pushed from e8f70e3 to dbb388c.
Signed-off-by: Nick Venenga <nick.venenga@15five.com>
Force-pushed from dbb388c to c977245.
Really, it probably also needs a diff suppression function and potentially other changes. I don't know very much about data types and their intended behavior. The main goal is to be able to use the provider for an external_function with return_type VARCHAR, which currently isn't possible (an error is encountered both with and without specifying a size).
/ok-to-test sha=c97724548d8e0bdbe3c03814ffc9f6b301cc4811
Integration tests failure for c97724548d8e0bdbe3c03814ffc9f6b301cc4811
@nijave Please merge our
I think what's happening is the slash command sends […]. Looks like when using slash commands, you need to override the ref for the […]. That said, probably the easiest workaround is to pull my branch and push to Snowflake-Labs/terraform-provider-snowflake and open a new PR.
Thanks for the detailed explanation! We had a team discussion about this and we agreed there's a problem with our dispatch pipeline (may be related to our change from […]). We will address this issue soon because it affects all community PRs. I ran the tests locally, and everything seems fine, so I'm merging this for now. This fix will be included in the next release. Please remember that this resource will be reworked after GA; this includes potential breaking changes.
thanks guys!
Fixes #3392