Amplify push got error "Message: Resource is not in the state stackUpdateComplete" #92
Comments
Hi @lehoai, can you share your …? Also, can you share the debug logs present here: Also, may I know which Amplify version you are using? |
Got the same issue. Very problematic; I'm not able to push anything (dev or production). My logs are 20k lines long.
I tried using the latest version of the Amplify CLI, 7.6.23. Here is part of the log, with the rollback starting: |
I have the same issue, I pushed a few destructive changes to my GraphQL model and it failed because a token expired during the push.
My CLI version is 7.6.23, using a Cloud9 instance. Only some destructive updates to the model since the last push. Recreation of tables is not an issue right now. |
I did some research and there are tons of "Resource not in state stackUpdateComplete" issues that were never solved but simply closed. The reporters have probably all recreated their whole environments. Related issues containing "stackUpdateComplete" start at aws-amplify/amplify-cli#82 and go all the way up to #95, with 18 issues still open and 149 closed, including this one (#9925). Drifts in the stacks happen; you need to handle this properly, IMHO. In my case it was the long time the update took, which caused a token to expire. Amazon, are you watching this very common problem? This is a no-go. This is clearly a high-priority issue that should be solved once and for all. EDIT: I could find some more hints. If you change more than 2 indexes in a GraphQL model, the push fails, complaining that too many parallel indexes were changed or deleted on a single table. I can't find the log entry about it; it was displayed on the terminal, but I lost it. The amplify log file in ~/amplify/logs/ is too long already; I can't find anything in it. After that, you will get the message "Resource is not in the state stackUpdateComplete" and you are stuck. The only way I found to get it back to work is deletion and recreation of the environment. Steps I did (at your own risk):
The whole procedure takes a long time; is there a better or faster way to do it? EDIT: I'm very sure now that this is caused by too many simultaneous DynamoDB index updates on the same table, which hits a DynamoDB limit (only one GSI can be created or deleted per table update), and that the expired-token message was somehow only related to the first error. This also explains why so many people encounter this when switching from V1 to V2 GraphQL models. So we have two issues:
|
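A minimal sketch of the incremental approach, i.e. pushing index changes one step at a time so no single push creates or deletes more than one GSI per table. The API name `myapi` and the staged schema files (`schema.step1.graphql`, …) are assumptions for illustration, not names from this thread:

```shell
# DynamoDB allows only one GSI create/delete per table update, so apply
# @index changes to schema.graphql one step at a time, pushing after each.
API_NAME="myapi"  # assumption: your Amplify GraphQL API resource name
SCHEMA="amplify/backend/api/${API_NAME}/schema.graphql"

for step in schema.step1.graphql schema.step2.graphql; do
  # each staged file is assumed to differ from the previous by one index
  if [ -f "$step" ] && command -v amplify >/dev/null 2>&1; then
    cp "$step" "$SCHEMA"
    amplify push --yes   # wait for each push to finish before the next
  fi
done
echo "staged schema path: $SCHEMA"
```

Each push then performs at most one index change per table, which should avoid the "too many parallel indexes" failure described above.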
@akshbhu |
I was able to get through by pushing my changes step by step.
The issue came from an @Index error, my bad, but at least an error message would have been helpful. |
@batical |
@lehoai |
Issue #88 is also related. I had likewise tried everything, including deleting the "deployment-state.json" file in the corresponding amplify stage bucket. The issue described there is very similar, only that in my case nothing helped. @lehoai I would give it a try: delete the file and push again; if you are lucky, you can get it working again. |
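For reference, deleting that file can be done with the AWS CLI. The bucket name below is a placeholder, not a value from this thread; find your real deployment bucket with `aws s3 ls`:

```shell
# Assumption: replace with your actual Amplify deployment bucket name
BUCKET="amplify-myapp-dev-12345-deployment"
S3_URI="s3://${BUCKET}/deployment-state.json"

# Guarded so the snippet is safe to run where the CLIs are not installed
if command -v aws >/dev/null 2>&1 && command -v amplify >/dev/null 2>&1; then
  aws s3 cp "$S3_URI" -   # optional: inspect the stuck deployment state first
  aws s3 rm "$S3_URI"     # delete the stuck state file
  amplify push            # then retry the push
else
  echo "aws/amplify CLI not found; would remove $S3_URI"
fi
```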
@ecc7220 |
@ecc7220 |
Hey @lehoai 👋 apologies for the delay! Can you share the CloudFormation errors that are printed prior to receiving the |
Hi @lehoai, in addition to the schema errors which Josef has mentioned above, could you also provide a bit more data about your environment? There are two things we'd like to understand in a bit more detail: auth token expiration and its impact on your deployment, and the contents of the deployment itself.
This will help us get an understanding of the changes being applied during the deployment. We can set up a call as well if you'd like; rather than sharing the schema publicly on GH, you can reach out to amplify-cli@amazon.com |
@alharris-at
I've checked the error log many times; there is only one reason given, "Resource is not in the state stackUpdateComplete", nothing more. (I know this error sometimes shows up when another error occurs, but not in my case; only "Resource is not in the state stackUpdateComplete" is thrown.) |
I see, thank you for the update @lehoai, we're going to create a new bug related to force push behavior in the AWS Console, which sounds related to what you're seeing here. Is there anything else specific we can help you out with on this issue? |
Hey @lehoai 👋 thank you for those details! To clarify, do you have the affected backend files and would you be willing to send us a zip archive to amplify-cli@amazon.com? If so, we would like to take a look and see if we are able to reproduce the issue using your backend definition as we have been unable to reproduce this ourselves. |
@josefaidt @alharris-at I think you should give more detail in the error log so developers can investigate the cause. |
I ran By the way, I didn't do any manual modification (e.g. from the console). My workaround was removing the api and adding the api again, as below: back up my schema file |
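The remove-and-re-add workaround above can be sketched as follows. The API name `myapi` is an assumption, and note that `amplify add api` prompts interactively:

```shell
API_NAME="myapi"  # assumption: your API resource name under amplify/backend/api/
SCHEMA="amplify/backend/api/${API_NAME}/schema.graphql"
BACKUP="${HOME}/schema.graphql.bak"

if command -v amplify >/dev/null 2>&1; then
  cp "$SCHEMA" "$BACKUP"   # back up the schema first
  amplify remove api       # remove the API category locally
  amplify push             # delete the API stack in the cloud
  amplify add api          # re-create the API (interactive prompts)
  cp "$BACKUP" "$SCHEMA"   # restore the original schema
  amplify push             # deploy the restored schema
else
  echo "amplify CLI not found; backup path would be $BACKUP"
fi
```

Be aware this deletes and recreates the backing DynamoDB tables, so existing data in them is lost.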
Hey @lehoai no worries on sending the schema. We will continue to investigate this issue. @naingaungphyo are you seeing any CloudFormation errors outputted to your terminal and would you mind sharing the logs at |
@AnilMaktala Hello, so I went to S3 and deleted the deployment-state.json like others had it. After doing so, the push did go through this time - I was not immediately stopped by the index issue. Unfortunately, after it tried to deploy it for a bit, just like the first time, the same error eventually came up. I tried it twice for good measure. |
So...the solution is to stop using Amplify? Cheers |
So what is the right way of updating primary keys on a table? I am also encountering the same error when trying to do so |
For anyone still wondering, what worked for me was deploying without the table whose primaryKey you want to change, then adding the updated table back and re-deploying. Since the table will be deleted anyway because of the index change, I figured I would delete it myself, because just updating the primaryKey didn't work for me. |
@judygab Hi Judy, thanks for the heads up. I've tried deleting the table through Dynamo before deploying, but I still got the same error. Just to clarify, if you can: did you remove the table from your schema, delete the table through Dynamo, and then recompile and deploy? And did that deploy successfully, and were you then able to add the table back to your schema and deploy successfully? This was the sort of loop-around I was hoping to find, so awesome if this works! |
For those who are still stuck: the root cause is a stuck deployment-state.json file that stores the deployment status. Ignore this horrible design choice and follow these steps.
|
Yes, so my steps were:
|
Ran into this as well. Deleting the deployment-state.json in S3 didn't work for me.
What's the point |
C'mon guys. Please prioritise this critical issue. |
Ran into this as well, and it's funny because even pulling and trying to push afterwards doesn't work for me. |
Having the same problem. |
One issue I ran into was trying to change which field the primary key was. Couldn't push after that. |
I ran into the same issue. |
If losing the existing data is not a problem, you can delete the existing schema/table and then re-create it with the required keys. |
I had the same problem. Removing the tables and re-adding one by one with the updated primary key as described above worked for me. |
In my case, I had the above error message with Storage, not the API. What helped in my case was to run At first, when changing settings, such as updating permissions or adding a trigger, I was getting an error: In my case, I think what was causing the issue was that I had a Lambda trigger for S3 that was not set up properly; I removed the link between S3 and the Lambda trigger and re-linked it. I also have an Admin group for Auth, and I needed to add Storage permissions for that group as well. I also had an override.ts for my Storage where I had a policy:
I think this policy may have been conflicting with the cli-inputs, so I commented it out. After making these changes, I stopped getting errors when running Just putting it out there, since I didn't see anyone mention Storage as throwing the error message in the original post above. |
I'm not sure if this will be helpful or if it might be a special case, but I'll describe the situation where I resolved the same error. |
Could you check your solution against these reproduction steps?
Not sure if anyone saw this, but the top Stack Overflow answer laid out the best options. For me, any change to schema.graphql would fail with the same error as here. I had to
And the second time worked 🍭 . So perhaps deleting it 2-3 times.. is the trick.. 👀 lol |
Has there been any movement on this? I am getting this whenever I want to add an |
What worked for me is the following:
|
Today I had the same issue. Looking for a solution or workaround, I found that one of the things that produces this issue is a conflicting change in the infra code. Since the env in the cloud was working fine, I created a backup of the amplify folder (so as not to forget which changes were made), then deleted the amplify folder and pulled the entire env again. It's just a band-aid; it's only a temporary fix until a permanent solution is found.
|
Before opening, please confirm:
How did you install the Amplify CLI?
npm
If applicable, what version of Node.js are you using?
No response
Amplify CLI Version
Using the latest version in Amplify CI/CD
What operating system are you using?
Mac
Did you make any manual changes to the cloud resources managed by Amplify? Please describe the changes made.
No manual changes made
Amplify Categories
Not applicable
Amplify Commands
push
Describe the bug
I am using CI/CD linked with my GitHub master branch. Until a few days ago, it worked properly. But now, when I try to merge source into the master branch, I get this error:
[WARNING]: ✖ An error occurred when pushing the resources to the cloud
[WARNING]: ✖ There was an error initializing your environment.
[INFO]:
DeploymentError: ["Index: 1 State: {\"deploy\":\"waitingForDeployment\"} Message: Resource is not in the state stackUpdateComplete"]
    at /root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/amplify-provider-awscloudformation/src/iterative-deployment/deployment-manager.ts:159:40
    at Interpreter.update (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/interpreter.js:267:9)
    at /root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/interpreter.js:112:15
    at Scheduler.process (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/scheduler.js:69:7)
    at Scheduler.flushEvents (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/scheduler.js:60:12)
    at Scheduler.schedule (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/scheduler.js:49:10)
    at Interpreter.send (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/interpreter.js:106:23)
    at _a.id (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/interpreter.js:1017:15)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:95:5)
Then I tried with the Amplify CLI and got the same error.
Expected behavior
The push succeeds.
Reproduction steps
I added a @connection, a @key, and a few @aws_subscribe directives, then pushed.
GraphQL schema(s)
# Put schemas below this line
Log output
Additional information
No response