"file" provisioner doesn't detect changes when copying a directory #6065
+1, I've experienced this as well. I think that this is by design, as per the documentation on provisioners, but it would be nice to be able to re-run provisioners without destroying infrastructure, for machines that can't easily be managed by configuration management tools (CoreOS instances, for example).
Here's a module I use in my project as a workaround. It depends on `bash`, `jq`, `tar`, and `md5sum` being available on the machine that runs Terraform.
usage.tf:

```hcl
module "myinstance__pathsync" {
  source      = "./utils/pathsync"
  local_path  = "some-dir/"
  remote_path = "/etc/some-dir"
  host        = "${aws_instance.myinstance.public_ip}"
  user        = "..."
  private_key = "..."
}
```

pathsync.tf:

```hcl
variable "local_path" { type = "string" }
variable "remote_path" { type = "string" }
variable "host" { type = "string" }
variable "user" { type = "string" }
variable "private_key" { type = "string" }

resource "null_resource" "provisioner_container" {
  # Re-run the provisioner whenever the host or the directory contents change.
  triggers {
    host = "${var.host}"
    md5  = "${data.external.md5path.result.md5}"
  }

  connection {
    host        = "${var.host}"
    user        = "${var.user}"
    private_key = "${var.private_key}"
  }

  provisioner "file" {
    source      = "${var.local_path}"
    destination = "${var.remote_path}"
  }
}

data "external" "md5path" {
  program = ["bash", "${path.module}/md5path.sh"]

  query = {
    path = "${var.local_path}"
  }
}
```

md5path.sh:

```bash
#!/bin/bash
set -ueo pipefail

# Read the query JSON from stdin and extract the path to hash.
query_path=$(jq -r '.path')

# Hash a tar stream of the directory so that any file change alters the result.
md5=$(tar -cf - "$query_path" | md5sum | cut -d' ' -f1)

printf '{"md5":"%s"}' "$md5"
```
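One caveat with this approach: the hash is computed over a tar stream, which includes file metadata such as modification times, so the trigger can change (and re-run the provisioner) even when the file contents are byte-for-byte identical, for example after a fresh checkout.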
Hi all! Sorry this wasn't as easy as it could've been. It is actually by design that the `file` provisioner runs only when its resource is created, since provisioners are intended as one-time initialization steps. In the case of the `file` provisioner, the more architecturally sound shape for this use-case would be a first-class resource, something like:

```hcl
### Hypothetical example. Not valid yet! ###
resource "ssh_file_tree" "example" {
  host            = "${var.host}"
  user            = "${var.user}"
  private_key     = "${var.private_key}"
  source_dir      = "${var.local_path}"
  destination_dir = "${var.remote_path}"
}
```

It is unfortunately not as simple as just adding the above resource, since as defined there it would have the same problem as the provisioner: it would run only once, on creation. To fix that, we must have a way for the configuration to include some description of the contents of the files, as @philippevk did with the external script that takes an MD5 of a tar archive. You can see this same problem in the design of the …

So there's some work to do to meet this use-case in a convenient way, but it does seem like a valid use-case to me. I expect this would also lead to requests to take some action after the files are uploaded (such as to send …), which is a further thing to design for. In the meantime, the workaround of using a `null_resource` as shown above is the best available approach.
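To make that concrete, here is a sketch of how such a content description might look, extending the hypothetical resource above (the `content_hash` attribute is imagined, not part of any real provider):

```hcl
### Still hypothetical ###
resource "ssh_file_tree" "example" {
  host            = "${var.host}"
  user            = "${var.user}"
  private_key     = "${var.private_key}"
  source_dir      = "${var.local_path}"
  destination_dir = "${var.remote_path}"

  # Imagined attribute: when the directory contents change, this value
  # changes, so Terraform would plan an update for the resource.
  content_hash    = "${data.external.md5path.result.md5}"
}
```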
@apparentlymart I recently hit this issue and was wondering if there was any progress in the meantime? I am also not sure if my problem is the same one that you addressed here: I have a … If I run …
Hi @Crapworks! Any time a resource is planned for replacement (shown as `-/+` in the plan output), its provisioners will run again as part of creating the replacement, because provisioners run on resource creation. What this issue covers is the case where nothing about the resource itself has changed, so Terraform plans no action at all and the `file` provisioner never gets a chance to re-upload the changed directory.
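A consequence of that behaviour is that you can force provisioners to re-run by deliberately marking a resource as tainted, so the next apply replaces it. A sketch, with illustrative resource addresses and pre-0.12 CLI syntax:

```sh
# Taint the resource so the next apply replaces it and re-runs its provisioners.
terraform taint aws_instance.myinstance

# For a resource inside a module (pre-0.12 syntax):
terraform taint -module=myinstance__pathsync null_resource.provisioner_container
```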
I tried using the `archive_file` data source to obtain a hash tied to the contents of a directory. My goal was to use the hash to trigger downstream resource updates as appropriate. Unfortunately, it looks like the only archive format supported by `archive_file` is zip, and zip produces a different hash for the same contents every time it is run.
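For reference, that attempt presumably looked something like this (a sketch; names are illustrative):

```hcl
data "archive_file" "dir" {
  type        = "zip"
  source_dir  = "${var.local_path}"
  output_path = "${path.module}/files.zip"
}

resource "null_resource" "sync" {
  triggers {
    # In principle this hash would change only when the directory contents
    # change, but zip embeds timestamps, so it changes on every run.
    archive_sha = "${data.archive_file.dir.output_sha}"
  }
}
```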
I used a trigger that executes the null_resource every time.
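A minimal sketch of such an always-run trigger, assuming the common `timestamp()` pattern:

```hcl
resource "null_resource" "provisioner_container" {
  triggers {
    # timestamp() yields a new value on every run, so this resource is
    # replaced (and its provisioners re-run) on every apply.
    always_run = "${timestamp()}"
  }

  # ... connection and provisioner blocks as in the module above ...
}
```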
The workaround above can only show that the hash has changed; for single files, if you interpolate the file contents directly into a trigger, you get a diff at plan time.
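A sketch of that single-file variant (file names are illustrative):

```hcl
resource "null_resource" "single_file" {
  triggers {
    # Interpolating the file contents means `terraform plan` shows the
    # actual text that changed, not just a different hash.
    file_contents = "${file("some-file.conf")}"
  }

  # (connection block omitted for brevity)
  provisioner "file" {
    source      = "some-file.conf"
    destination = "/etc/some-file.conf"
  }
}
```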
If I use a "file" provisioner to copy a directory, for example (paths illustrative):
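```hcl
resource "aws_instance" "example" {
  # ...

  provisioner "file" {
    # Copy a whole local directory to the instance (illustrative paths).
    source      = "some-dir/"
    destination = "/etc/some-dir"
  }
}
```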
Changes to files within that directory do not trigger a rebuild; in other words, `terraform plan` says there are no changes to make. (Currently using 0.6.11.)