
Amplify push got error "Message: Resource is not in the state stackUpdateComplete" #92

lehoai opened this issue Mar 7, 2022 · 89 comments

@lehoai

lehoai commented Mar 7, 2022

Before opening, please confirm:

  • I have installed the latest version of the Amplify CLI (see above), and confirmed that the issue still persists.
  • I have searched for duplicate or closed issues.
  • I have read the guide for submitting bug reports.
  • I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue.
  • I have removed any sensitive information from my code snippets and submission.

How did you install the Amplify CLI?

npm

If applicable, what version of Node.js are you using?

No response

Amplify CLI Version

Using the latest version in Amplify CI/CD

What operating system are you using?

Mac

Did you make any manual changes to the cloud resources managed by Amplify? Please describe the changes made.

No manual changes made

Amplify Categories

Not applicable

Amplify Commands

push

Describe the bug

I am using CI/CD linked to my GitHub master branch. Until a few days ago it worked properly, but now when I merge into the master branch I get the error:
[WARNING]: ✖ An error occurred when pushing the resources to the cloud
[WARNING]: ✖ There was an error initializing your environment.
[INFO]: DeploymentError: ["Index: 1 State: {\"deploy\":\"waitingForDeployment\"} Message: Resource is not in the state stackUpdateComplete"] at /root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/amplify-provider-awscloudformation/src/iterative-deployment/deployment-manager.ts:159:40 at Interpreter.update (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/interpreter.js:267:9) at /root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/interpreter.js:112:15 at Scheduler.process (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/scheduler.js:69:7) at Scheduler.flushEvents (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/scheduler.js:60:12) at Scheduler.schedule (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/scheduler.js:49:10) at Interpreter.send (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/interpreter.js:106:23) at _a.id (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/interpreter.js:1017:15) at runMicrotasks (<anonymous>) at processTicksAndRejections (internal/process/task_queues.js:95:5)

Then I tried with the Amplify CLI and got the same error.

Expected behavior

The push succeeds.

Reproduction steps

I added a @connection, a @key, and a few @aws_subscribe directives, then pushed.

GraphQL schema(s)

# Put schemas below this line

Log output

# Put your logs below this line


Additional information

No response

@akshbhu
Contributor

akshbhu commented Mar 8, 2022

Hi @lehoai

Can you share your GraphQL schema and the categories you have added so that I can reproduce this on my end?

Can you also share the debug logs at ~/.amplify/logs/amplify-cli-<issue-date>.log?

Also, may I know which Amplify version you are using?

@batical

batical commented Mar 11, 2022

Got the same issue. Very problematic: I'm not able to push anything (dev or production).

My logs are 20k lines long.

2022-03-11T12:00:38.110Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Deploying (1 of 3)"}])
2022-03-11T12:00:38.110Z|info : amplify-provider-awscloudformation.aws-s3.uploadFile.s3.putObject([{"Key":"[***]ment-[***]json","Bucket":"[***]it-[***]ev-[***]161237-[***]ment"}])
2022-03-11T12:00:38.111Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Deploying (1 of 3)"}])
2022-03-11T12:00:39.234Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Deploying (1 of 3)"}])
2022-03-11T12:04:41.051Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Waiting for DynamoDB indices to be ready"}])
2022-03-11T12:04:44.266Z|info : amplify-provider-awscloudformation.aws-s3.uploadFile.s3.putObject([{"Key":"[***]ment-[***]json","Bucket":"[***]it-[***]ev-[***]161237-[***]ment"}])
2022-03-11T12:04:44.493Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Deploying (2 of 3)"}])
2022-03-11T12:04:44.493Z|info : amplify-provider-awscloudformation.aws-s3.uploadFile.s3.putObject([{"Key":"[***]ment-[***]json","Bucket":"[***]it-[***]ev-[***]161237-[***]ment"}])
2022-03-11T12:04:44.494Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Deploying (2 of 3)"}])
2022-03-11T12:04:45.634Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Deploying (2 of 3)"}])
2022-03-11T12:08:47.462Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Waiting for DynamoDB indices to be ready"}])
2022-03-11T12:09:51.696Z|info : amplify-provider-awscloudformation.aws-s3.uploadFile.s3.putObject([{"Key":"[***]ment-[***]json","Bucket":"[***]it-[***]ev-[***]161237-[***]ment"}])
2022-03-11T12:09:51.920Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Deploying (3 of 3)"}])
2022-03-11T12:09:51.921Z|info : amplify-provider-awscloudformation.aws-s3.uploadFile.s3.putObject([{"Key":"[***]ment-[***]json","Bucket":"[***]it-[***]ev-[***]161237-[***]ment"}])
2022-03-11T12:09:51.924Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Deploying (3 of 3)"}])
2022-03-11T12:09:52.941Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Deploying (3 of 3)"}])
2022-03-11T12:21:58.236Z|info : amplify-provider-awscloudformation.aws-s3.uploadFile.s3.putObject([{"Key":"[***]ment-[***]json","Bucket":"[***]it-[***]ev-[***]161237-[***]ment"}])
2022-03-11T12:21:58.239Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Rolling back (1 of 3)"}])
2022-03-11T12:21:58.240Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Rolling back (2 of 3)"}])
2022-03-11T12:21:58.241Z|info : amplify-provider-awscloudformation.aws-s3.uploadFile.s3.putObject([{"Key":"[***]ment-[***]json","Bucket":"[***]it-[***]ev-[***]161237-[***]ment"}])
2022-03-11T12:21:58.242Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Rolling back (2 of 3)"}])
2022-03-11T12:21:58.478Z|error : amplify-provider-awscloudformation.deployment-manager.startRolbackFn([{"index":2}])
Error: Cannot start step then the current step is in ROLLING_BACK status.
2022-03-11T12:21:59.401Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Rolling back (2 of 3)"}])
2022-03-11T12:26:01.228Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Rolling back (2 of 3)"}])
2022-03-11T12:26:04.433Z|info : amplify-provider-awscloudformation.aws-s3.uploadFile.s3.putObject([{"Key":"[***]ment-[***]json","Bucket":"[***]it-[***]ev-[***]161237-[***]ment"}])
2022-03-11T12:26:04.648Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Rolling back (3 of 3)"}])
2022-03-11T12:26:04.650Z|info : amplify-provider-awscloudformation.aws-s3.uploadFile.s3.putObject([{"Key":"[***]ment-[***]json","Bucket":"[***]it-[***]ev-[***]161237-[***]ment"}])
2022-03-11T12:26:04.654Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Rolling back (3 of 3)"}])
2022-03-11T12:26:05.732Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Rolling back (3 of 3)"}])
2022-03-11T12:30:07.578Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Rolling back (3 of 3)"}])

I'm using the latest version of the Amplify CLI, 7.6.23.

The excerpt above shows the part of the log where the rollback starts.

@ecc7220

ecc7220 commented Mar 11, 2022

I have the same issue. I pushed a few destructive changes to my GraphQL model and it failed because a token expired during the push.

2022-03-11T12:22:24.363Z|error : amplify-provider-awscloudformation.aws-s3.uploadFile.s3([{"Key":"[***]ment-[***]json","Bucket":"[***]ify-[***]pool-[***]ing-[***]316-[***]ment"}]) ExpiredToken: The provided token has expired.
2022-03-11T12:22:24.363Z|error : amplify-provider-awscloudformation.deployment-manager.startRolbackFn([{"index":2}]) ExpiredToken: The provided token has expired.
2022-03-11T12:22:38.638Z|error : amplify-provider-awscloudformation.deployment-manager.getTableStatus([{"tableName":"[***]er-[***]6sfqmjiikqe-[***]ing"}]) ExpiredTokenException: The security token included in the request is expired
I tried it again and got:

2022-03-11T12:24:18.467Z|info : amplify-provider-awscloudformation.deployment-manager.rollback([{"spinner":"Waiting for previous deployment to finish"}])
2022-03-11T12:24:18.526Z|error : amplify-provider-awscloudformation.deployment-manager.DeploymentManager([{"stateValue":"failed"}]) DeploymentError: ["Index: 3 State: {\"preRollback\":\"previousDeploymentReadyCheck\"} Message: Resource is not in the state stackUpdateComplete"]
Then I pulled the latest env, pushed, and got it again:

2022-03-11T12:35:34.798Z|info : amplify-provider-awscloudformation.deployment-manager.rollback([{"spinner":"Waiting for previous deployment to finish"}])
2022-03-11T12:35:34.834Z|error : amplify-provider-awscloudformation.deployment-manager.DeploymentManager([{"stateValue":"failed"}]) DeploymentError: ["Index: 3 State: {\"preRollback\":\"previousDeploymentReadyCheck\"} Message: Resource is not in the state stackUpdateComplete"]
How can I solve this? I'm stuck. Please provide steps to fix it, even if I need to remove some stuff; I need to get it working today!

My CLI version is 7.6.23, on a Cloud9 instance. Only some destructive updates to the model since the last push. Recreating tables is not an issue right now.

@ecc7220

ecc7220 commented Mar 11, 2022

I did some research and there are tons of "Resource not in state stackUpdateComplete" issues that were never solved but simply closed. Those people have probably all recreated their whole environments.

Related issues containing "stackUpdateComplete" start at aws-amplify/amplify-cli#82 and go all the way up to #95, with 18 issues still open and 149 closed, including this one (#9925). Stack drifts happen; you need to handle them properly, IMHO. In my case it was the long time the update took, which caused a token to expire.

Amazon, are you watching this? It is very common.

This is a no-go.

This is clearly a high-priority issue that should be solved once and for all.
Recovery from push failures and stack drifts is essential; this should simply just work, all the time, like a file system.

EDIT:

I could find some more hints: if you change more than 2 indexes in a GraphQL model, the push fails, complaining that too many indexes were changed or deleted in parallel on a single table. I can't find the log entry about it; it was displayed on the terminal, but I lost it, and the Amplify log file in ~/.amplify/logs/ is already too long to find anything. After that, you get the message "Resource is not in the state stackUpdateComplete" and you are stuck. The only way I found to get it back to work is deleting and recreating the environment. Steps I did (at your own risk):

  1. Save all the data that is in the Amplify-controlled Storage (S3), and archive the whole environment.
  2. Back up all the data in your database tables; you will need to import it back into the new tables later. In step 5 the env will be deleted, along with all tables and Storage S3 buckets! Be careful; this is at your own risk.
  3. Create a new env with: amplify env add your_new_env_name
  4. You will now be in your new env; the code and backend config should still be there. You can check with: amplify status
  5. Now delete your old env: amplify env remove broken_env_name (at your own risk)
  6. Create a new env with the same name; this env will work: amplify env add broken_env_name
  7. You should be in the new env now; check with: amplify status
  8. Push your backend into the new env; this will take a while...
  9. Now it's time to re-import all the data you saved in steps 1 and 2.

The whole procedure takes a long time; is there a better or faster way to do it?
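
For readers following along, the steps above map roughly to this command sequence (env names are placeholders; the data backup and restore in steps 1, 2 and 9 are manual, and all of this is at your own risk):

  # 1-2. Back up Storage (S3) content and DynamoDB table data manually first.

  # 3-4. Create a temporary env so the backend definition is preserved, then verify it.
  amplify env add your_new_env_name
  amplify status

  # 5. Delete the broken env -- this deletes its tables and S3 buckets!
  amplify env remove broken_env_name

  # 6-7. Recreate an env with the original name and verify you are in it.
  amplify env add broken_env_name
  amplify status

  # 8. Push the backend into the recreated env (this takes a while).
  amplify push

  # 9. Re-import the data saved in steps 1-2.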

EDIT:

I'm quite sure now that this is caused by too many simultaneous DynamoDB index updates on the same table, which hits a DynamoDB limit, and that the expired-token message was somehow only related to the first error. This also explains why so many people encounter this when switching from V1 to V2 GraphQL models.

So we have two issues:

  1. A failed push, for whatever reason, needs to be recoverable.
  2. GraphQL model updates occasionally trigger simultaneous index updates or deletions, which abort the running amplify push and leave you in an unrecoverable state.

@lehoai
Author

lehoai commented Mar 12, 2022

(screenshots of the schema changes)

@akshbhu
Sorry, I was busy.
I've changed my schema as shown above, plus a few @subscription and @function directives.
@ecc7220
I can't create a new env because this is a production env.

@batical

batical commented Mar 12, 2022

I was able to get through by pushing my changes step by step:

  • pull env --restore
  • push the changes one by one until I found the error. Took me at least 4 hours ...

The issue came from an @index error. My bad, but at least an error message would have been helpful.
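
Roughly, that workflow looks like this (a sketch; assuming "pull env --restore" refers to amplify env pull --restore, which discards local changes, so back them up first):

  # Restore the local backend definition from the cloud, discarding local edits.
  amplify env pull --restore

  # Re-apply the schema/resource changes in small slices, pushing after each one,
  # until the change that triggers the failure is found.
  amplify push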

@lehoai
Author

lehoai commented Mar 12, 2022

@batical
Can you give me more info?
The Amplify error log is not useful at all!

@ecc7220

ecc7220 commented Mar 12, 2022

@lehoai
I don't know how to recover from this error without recreating the env, sorry; somebody with deeper knowledge needs to help out. The only thing I could figure out is the cause of the error, which comes from the index issue I mentioned above. So don't change too much at once in the model; it can lead to a broken env.

@ecc7220

ecc7220 commented Mar 12, 2022

Issue #88 is also related. I had also tried everything, including deleting the "deployment-state.json" file in the corresponding amplify stage bucket. The issue described there is very similar; only in my case nothing helped.

@lehoai I would give it a try: delete the file and try pushing again; if you are lucky, you can get it working again.
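
For reference, deleting that file with the AWS CLI looks roughly like this (a sketch; the bucket name is a placeholder for the env's deployment bucket, and this is at your own risk):

  # Locate the deployment bucket of the affected env, then remove the stored state file.
  aws s3 ls | grep deployment
  aws s3 rm s3://amplify-<app>-<env>-<id>-deployment/deployment-state.json

  # Retry the push afterwards.
  amplify push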

@lehoai
Author

lehoai commented Mar 13, 2022

@ecc7220
Thanks. I will try.

@ecc7220

ecc7220 commented Mar 13, 2022

@lehoai if this does not work, also try the solution for #88.
This includes modifying the deployment bucket as well.
It is a much better solution than mine of recreating everything.
If I run into a similar issue next time, I will try the modification suggested in #88.
Good luck!

@lehoai
Author

lehoai commented Mar 21, 2022

@ecc7220
Thanks, I will try.
I promise you this is the last time I work with Amplify. Terrible tool ever: Unstable, slow support, there is a lot of bugs!

@josefaidt
Contributor

Hey @lehoai 👋 apologies for the delay! Can you share the CloudFormation errors that are printed prior to receiving the Resource is not in the state stackUpdateComplete message? Typically when we see this error the CloudFormation errors provide additional insight.

@josefaidt josefaidt added the p1 label Mar 31, 2022
@alharris-at
Contributor

alharris-at commented Apr 4, 2022

Hi @lehoai, in addition to the schema errors Josef mentioned above, could you also provide a bit more data about your environment? There are two things we'd like to understand in more detail: auth token expiration and its impact on your deployment, and the contents of the deployment itself.

  1. What auth mechanism are you using for your account? i.e. are you using user access/secret keys for a given user, or are you using something like STS to generate short-lived federated tokens?
  2. The schema you're starting out with when you kick off a deployment (previous schema).
  3. Schema you are attempting to deploy.

This will help us get an understanding of the changes being applied during the deployment. We can also set up a call if you'd like; rather than sharing the schema publicly on GitHub, you can reach out to amplify-cli@amazon.com.

@lehoai
Author

lehoai commented Apr 6, 2022

@alharris-at
Sorry for the late reply.
In the end, we had to create a new env and redeploy the whole project; then it worked. The old env was deleted. So I think it's not a problem with the schema (I didn't create or update any index).

  1. I linked Amplify with GitHub in the AWS console, so it automatically redeploys every time the source code is merged. I don't use access/secret keys.
  2, 3. As I said, I didn't create or update any index; I just added a few subscriptions and columns. I can't show the details of the schema.

I've checked the error log many times, and there is only one message, "Resource is not in the state stackUpdateComplete", nothing more. (I know this error sometimes shows up when another error occurs, but not in my case; only "Resource is not in the state stackUpdateComplete" is thrown.)

@alharris-at
Contributor

I see, thank you for the update @lehoai. We're going to create a new bug related to force-push behavior in the AWS Console, which sounds related to what you're seeing here. Is there anything else specific we can help you out with on this issue?

@josefaidt
Contributor

Hey @lehoai 👋 thank you for those details! To clarify, do you have the affected backend files and would you be willing to send us a zip archive to amplify-cli@amazon.com? If so, we would like to take a look and see if we are able to reproduce the issue using your backend definition as we have been unable to reproduce this ourselves.

@lehoai
Author

lehoai commented Apr 14, 2022

@josefaidt @alharris-at
Thank you for your response. Honestly, I really want to share the details of the schema and the backend files, but there is an NDA, so I can't.
I gave you everything I can share above: the error log, part of the schema...

I think you should put more detail in the error log so that developers can investigate the cause.

@naingaungphyo

naingaungphyo commented Apr 26, 2022

I ran amplify push and also got the same "Resource is not in the state stackUpdateComplete" error after changing many indexes and the primary key of one of my models.
I tried setting enableIterativeGsiUpdates to true in cli.json and also used the --force and --allow-destructive-graphql-schema-updates CLI flags according to this troubleshooting guide, but none of them worked.

By the way, I didn't make any manual modifications (e.g. from the console).

My workaround was removing the API and adding it again, as below.

  • Back up my schema file
  • amplify remove api
  • amplify push
  • amplify add api and use my backup schema
  • amplify push

The CLI version used was 8.0.2 with the v2 transformer.
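
Sketched out, that workaround is roughly the following (the schema path assumes a single-file schema, and the API name is a placeholder):

  # Back up the schema before removing the API category.
  cp amplify/backend/api/<api-name>/schema.graphql ~/schema.graphql.bak

  amplify remove api
  amplify push

  # Re-add the API, restore the saved schema, and push again.
  amplify add api
  cp ~/schema.graphql.bak amplify/backend/api/<api-name>/schema.graphql
  amplify push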

@josefaidt
Contributor

Hey @lehoai no worries on sending the schema. We will continue to investigate this issue.

@naingaungphyo are you seeing any CloudFormation errors output to your terminal, and would you mind sharing the logs at ~/.amplify/logs from the day this occurred? And finally, approximately how many changes were applied prior to receiving this error?

@Tshetrim

Tshetrim commented Apr 7, 2023

@AnilMaktala Hello, so I went to S3 and deleted the deployment-state.json like others had done.

After doing so, the push did go through this time; I was not immediately stopped by the index issue. Unfortunately, after it tried to deploy for a bit, just like the first time, the same error eventually came up.

(screenshot of the error)

I tried it twice for good measure.

(screenshots of the repeated attempts)

@KenObie

KenObie commented May 3, 2023

So...the solution is to stop using Amplify? Cheers

@judygab

judygab commented May 17, 2023

Hey @hackmajoris 👋, we are able to reproduce the issue with the provided steps and the schema below. Steps to reproduce:

Here is the initial schema:

type ShorterLink @model {
  id: ID! @primaryKey(sortKeyFields: ["createdAt"])
  name: String!
  createdAt: String!
  shorterUrl: String
  originalUrl: String
  clicks: Int
  lastOpen: String
  description: String
  info: String
  logs: [Log] @hasMany
}


  type Log @model {
    id: ID! @primaryKey(sortKeyFields: ["createdAt"])
    createdAt: String!
    title: String!
    link: ShorterLink @belongsTo
  }

Run amplify push -y. After the push, update the Log model (remove sortKeyFields):

  type Log @model  {
    id: ID! @primaryKey
    createdAt: String!
    title: String!
    link: ShorterLink @belongsTo
  }

Run amplify push --allow-destructive-graphql-schema-updates.

(screenshot of the resulting error)

So what is the right way of updating primary keys on a table? I am also encountering the same error when trying to do so.

@judygab

judygab commented May 18, 2023

For anyone still wondering, what worked for me was deploying without the table whose primaryKey I wanted to change, then adding the updated table back and re-deploying. I figured that since the table would be deleted anyway because of the index change, I would delete it myself, because just updating the primaryKey didn't work for me.

@Tshetrim

@judygab Hi Judy, thanks for the heads up. I've tried deleting the table through DynamoDB before deploying, but I still got the same error.

Just to clarify, if you can: did you remove the table from your schema, delete the table through DynamoDB, and then recompile and deploy?

And did that deploy successfully, and were you then able to add the table back to your schema and deploy successfully?

This is the sort of workaround I was hoping to find, so it would be awesome if this works!

@KenObie

KenObie commented May 20, 2023

For those who are still stuck: the root cause is a stuck deployment.json file that stores the deployment status. Ignore this horrible design choice and follow these steps (a rough command sketch follows the list).

  1. Roll back the deployment using the CLI
  2. Delete deployment.json in the root of the Amplify S3 deployment bucket
  3. Redeploy
  4. Delete the primary index and redeploy (one table at a time).
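
A rough command sketch of those steps (the rollback flag and the state-file name, called deployment-state.json elsewhere in this thread, should be verified against your CLI version; the bucket name is a placeholder):

  # 1. Roll back the stuck iterative deployment.
  amplify push --iterative-rollback

  # 2. Delete the stored deployment state from the deployment bucket.
  aws s3 rm s3://amplify-<app>-<env>-<id>-deployment/deployment-state.json

  # 3. Redeploy.
  amplify push

  # 4. Change primary indexes one table at a time, pushing after each change.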

@judygab

judygab commented May 22, 2023

(quoting @Tshetrim's questions above)

Yes, so my steps were:

  1. Remove the table from the schema
  2. Re-deploy (the deployment was successful)
  3. Add the table back with the updated primary keys
  4. Delete deployment.json from the S3 bucket (I tried re-deploying but it got stuck with no errors, so I wasn't able to stop it or try again because of the stored deployment state)
  5. Re-deploy

@malmgrens4

Ran into this as well. Deleting the deployment-status.json in S3 didn't work for me.
Ended up making a backup of my changes and pulling back down from the server with amplify pull to overwrite my local changes.
Then redeploying changes incrementally.
Amplify really needs more detailed errors and better tolerance for multiple changes; they're bound to happen in the early stages of a project.

@omegabyte

What's the point of --force if it doesn't force?

@hackmajoris

C'mon guys. Please prioritise this critical issue.

@mgrabka

mgrabka commented Jun 23, 2023

(quoting @malmgrens4's comment above)

Ran into this as well, and it's funny because even pulling and trying to push afterwards doesn't work for me.

@nxia416

nxia416 commented Jun 24, 2023

Having the same problem.

@malmgrens4

One issue I ran into was trying to change which field the primary key was. Couldn't push after that.

@chrisl777

I ran into the same issue.

@hackmajoris

hackmajoris commented Jul 6, 2023

I ran into the same issue.

If losing the existing data is not a problem, you can delete the existing schema/table and then re-create it with the required keys.

@evan1108

I had the same problem. Removing the tables and re-adding one by one with the updated primary key as described above worked for me.

@chrisl777

In my case, I had the above error message with Storage, not the API.

What helped in my case was to run amplify update storage and run through all my settings.

At first, when changing settings such as updating permissions or adding a trigger, I was getting an error: "Resolution error: statement.freeze is not a function".

In my case, I think what was causing the issue was that I had a Lambda trigger for S3 that was not set up properly. I removed the link between S3 and the Lambda trigger and re-linked it. I also have an Admin group for Auth, and I needed to add permissions for Storage for that group as well. I also had an override.ts for my Storage with this policy:

   // Grants unauthenticated (guest) users read access to objects under the public/ prefix.
   resources.s3GuestReadPolicy.policyDocument.statements.push({
     Effect: "Allow",
     Action: "s3:GetObject",
     Resource: `${resources.s3Bucket.attrArn}/public/*`
   })

I think this policy may have been conflicting with the cli-inputs, so I commented out this policy.

After making these changes, I stopped getting errors when running amplify update storage and then my backend build succeeded.

Just putting it out there since I didn't see anyone mention Storage as throwing the error message in the original post above.

@donkee1982

I'm not sure if this will be helpful or if it might be a special case, but I'll describe the situation where I resolved the same error.
In short, I checked and corrected the contents of team-provider-info.json, and after doing so the error was resolved and the deployment succeeded.
In more detail: in my case I was sharing a single backend between two applications, and there was a discrepancy in the environment variables of the functions section in team-provider-info.json. Once I corrected this, the error was resolved.
It seems that even after running the amplify pull command, team-provider-info.json was not updated, which led to this situation.
This was not shown in the Amplify Console logs; I noticed it purely by chance.
I hope this proves useful to someone.
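
A quick way to spot such a discrepancy (a sketch; app-a and app-b stand for the two checkouts sharing the backend, jq must be installed, and the JSON paths are assumptions about the file's layout):

  # Compare the whole file between the two application checkouts, with keys sorted.
  diff <(jq -S . app-a/amplify/team-provider-info.json) \
       <(jq -S . app-b/amplify/team-provider-info.json)

  # Or inspect just the per-environment function settings.
  jq '.["<env-name>"].categories.function' amplify/team-provider-info.json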

@hackmajoris

hackmajoris commented Sep 4, 2023

(quoting @donkee1982's comment above)

Could you check your solution against these reproduction steps?
#92 (comment)

@nikolaigeorgie

Not sure if anyone saw this, but the top Stack Overflow answer laid out the best options. For me, any change to schema.graphql would fail with the same error. I had to:

  1. Delete deployment-state.json in the S3 bucket for the deployment.
  2. amplify push --force (failed for the same reason)
  3. Step 1 again
  4. amplify push --force

And the second time it worked 🍭. So perhaps deleting it 2-3 times is the trick 👀 lol

@drewjhart

Has there been any movement on this? I am getting this whenever I want to add an @index to an existing table. Currently, I am recreating the env as a workaround, but that is less than ideal because everything in the DynamoDB tables needs to be deleted.

@KarthikPoonjar

What worked for me is the following:

  1. First remove all the fields that have @hasMany and @belongsTo, then push
  2. Add/remove the primaryKey field and push
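
A minimal sketch of that two-phase push (assuming a primary-key change drops and recreates the table, so the destructive-updates flag may be required):

  # Phase 1: remove the @hasMany / @belongsTo fields from the affected models
  # in schema.graphql, then push.
  amplify push

  # Phase 2: change the @primaryKey field in schema.graphql, then push again.
  amplify push --allow-destructive-graphql-schema-updates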

@djom202

djom202 commented May 3, 2024

Today I had the same issue. Looking for a solution or workaround, I found that one of the things that can produce this issue is a conflicting change in the infra code, while the env in the cloud was actually working fine. I made a backup of the amplify folder (so I wouldn't forget which changes were made), then deleted the amplify folder and pulled the entire env again. It's just a band-aid; only a temporary fix until a permanent solution is found.

I hope it serves you well.
