Azure Blob Storage support #222
Conversation
Hey @kraihn,
Force-pushed from 689e52c to 807710e
Can you take another look @davewongillies?
@azr, can you take a look at my PR? I noticed you've merged a few other PRs recently.
Hey @kraihn, thanks for opening this! I think this is a very valuable PR, but I'm having conflicting thoughts because of #193. One thing I had in mind would be to split the dependencies into submodules, so that users can decide what they want to pull in. Another thing to note here is that there's now a v2 branch with some breaking changes; if you wanted to apply my suggestion, we'd need to merge this onto it, because I don't want to break anything in master, and the full scope of this is going to be a breaking change.
This is amazing. When will it be available on the main branch?
Adds a Detector and getter for Azure Blob Storage, with dependencies on github.com/Azure/go-autorest/autorest/azure and github.com/Azure/azure-storage-go. An access key is required for the SDK client; public blobs should be able to use the plain HTTP getter anyway.
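For context, the detector described above has to split a source URL into its storage-account, container, and blob-path parts. Below is a minimal, hypothetical sketch of that parsing; the function name and the exact URL shape are assumptions for illustration, not this PR's actual code:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// parseAzureURL is a hypothetical sketch of how a getter might split an
// Azure Blob Storage URL of the form
// https://<account>.blob.core.windows.net/<container>/<blob path>
// into its parts. It is not the PR's real parseUrl implementation.
func parseAzureURL(raw string) (account, container, blobPath string, err error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", "", "", err
	}
	// The account name is the first DNS label of the host.
	host := u.Host
	i := strings.Index(host, ".")
	if i <= 0 {
		return "", "", "", fmt.Errorf("unexpected host %q", host)
	}
	account = host[:i]
	// The path is /<container>/<blob path...>.
	parts := strings.SplitN(strings.TrimPrefix(u.Path, "/"), "/", 2)
	container = parts[0]
	if len(parts) == 2 {
		blobPath = parts[1]
	}
	return account, container, blobPath, nil
}

func main() {
	a, c, p, err := parseAzureURL("https://myaccount.blob.core.windows.net/modules/network/main.tf")
	if err != nil {
		panic(err)
	}
	fmt.Println(a, c, p) // myaccount modules network/main.tf
}
```

A SAS token in the URL's query string would survive this split untouched, since only the host and path are inspected.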
Hey @azr, I don't write Go for my day job, so any submodule splitting would be out of my comfort zone. This doesn't look to me as though it breaks anything in master. How long until v2 is complete? Do you want me to wait for that, or can we merge this into the current version to keep moving?
//// Parse URL
//accountName, baseURL, containerName, blobPath, accessKey, err := g.parseUrl(u)
//if err != nil {
// return 0, err
//}
//
//client, err := g.getBobClient(accountName, baseURL, accessKey)
//if err != nil {
// return 0, err
//}
//
//container := client.GetContainerReference(containerName)
//
//containerReference := storage.GetContainerReference(containerName)
//blobReference := containerReference.GetBlobReference(c.keyName)
//options := &storage.GetBlobOptions{}
//
//// List the object(s) at the given prefix
//params := storage.ListBlobsParameters{
// Prefix: blobPath,
//}
//resp, err := container.ListBlobs(params)
//if err != nil {
// return 0, err
//}
//
//for _, b := range resp.Blobs {
// // Use file mode on exact match.
// if b.Name == blobPath {
// return ClientModeFile, nil
// }
//
// // Use dir mode if child keys are found.
// if strings.HasPrefix(b.Name, blobPath+"/") {
// return ClientModeDir, nil
// }
//}
//
//// There was no match, so just return file mode. The download is going
//// to fail but we will let Azure return the proper error later.
//return ClientModeFile, nil
//ClientModeFile := nil
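The commented-out listing logic above decides between file and directory mode: an exact blob-name match means file mode, and any child key under `<path>/` means directory mode. A standalone, hedged sketch of that decision (the blob names here are illustrative; the real code would obtain them from an Azure SDK listing call):

```go
package main

import (
	"fmt"
	"strings"
)

// ClientMode mirrors go-getter's file-vs-directory distinction.
type ClientMode int

const (
	ClientModeFile ClientMode = iota
	ClientModeDir
)

// detectMode sketches the commented-out listing logic: given the blob
// names returned for a prefix listing, an exact match means file mode,
// and any child key under "<path>/" means directory mode.
func detectMode(blobNames []string, blobPath string) ClientMode {
	for _, name := range blobNames {
		// Use file mode on exact match.
		if name == blobPath {
			return ClientModeFile
		}
		// Use dir mode if child keys are found.
		if strings.HasPrefix(name, blobPath+"/") {
			return ClientModeDir
		}
	}
	// No match: return file mode and let Azure surface the real error
	// when the download is attempted.
	return ClientModeFile
}

func main() {
	names := []string{"modules/network/main.tf", "modules/network/vars.tf"}
	fmt.Println(detectMode(names, "modules/network") == ClientModeDir)          // true
	fmt.Println(detectMode(names, "modules/network/main.tf") == ClientModeFile) // true
}
```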
// From the Azure portal, get your storage account name and key and set environment variables.
//accountName, accountKey := os.Getenv("AZURE_STORAGE_ACCOUNT"), os.Getenv("AZURE_STORAGE_ACCESS_KEY")
//if len(accountName) == 0 || len(accountKey) == 0 {
// log.Fatal("Either the AZURE_STORAGE_ACCOUNT or AZURE_STORAGE_ACCESS_KEY environment variable is not set")
//}
//
//// Create a default request pipeline using your storage account name and account key.
//credential, err := azblob.NewSharedKeyCredential(accountName, accountKey)
//if err != nil {
// log.Fatal("Invalid credentials with error: " + err.Error())
//}
//p := azblob.NewPipeline(credential, azblob.PipelineOptions{})
//
//// Create a random string for the quick start container
//containerName := fmt.Sprintf("quickstart-%s", randomString())
//
//// From the Azure portal, get your storage account blob service URL endpoint.
//URL, _ := url.Parse(
// fmt.Sprintf("https://%s.blob.core.windows.net/%s", accountName, containerName))
//
//// Create a ContainerURL object that wraps the container URL and a request
//// pipeline to make requests.
//containerURL := azblob.NewContainerURL(*URL, p)
//
//// Create the container
//fmt.Printf("Creating a container named %s\n", containerName)
//ctx := context.Background() // This example uses a never-expiring context
//_, err = containerURL.Create(ctx, azblob.Metadata{}, azblob.PublicAccessNone)
//handleErrors(err)
//
//// Here's how to download the blob
//downloadResponse, err := blobURL.Download(ctx, 0, azblob.CountToEnd, azblob.BlobAccessConditions{}, false)
//
//// NOTE: automatic retries are performed if the connection fails
//bodyStream := downloadResponse.Body(azblob.RetryReaderOptions{MaxRetryRequests: 20})
//
//// read the body into a buffer
//downloadedData := bytes.Buffer{}
//_, err = downloadedData.ReadFrom(bodyStream)
//handleErrors(err)
Should we keep this commented code? Is this a work in progress? (Ditto throughout.)
Is it because the blob getter doesn't allow downloading folders?
| "strings" | ||
| "log" | ||
| "bytes" | ||
| // |
@kraihn, sorry for the lag here, I have been heavily focusing on a Packer release. I think the best option here is to merge this one into master, and then we'll take it from there to move it to the v2 branch. Do you want to continue working on this?
This is blocking some work on our Cloud Service Broker project (https://github.com/pivotal/cloud-service-broker); we would sincerely appreciate you accepting this PR into master!
@azr, I would love to see this merged in, but I don’t have the time to continue working on it.
@omerbensaadon, is there anything you need to keep working on this? I can try to help you with that. For now, it seems it's missing the code for dealing with directories, if that's something that's possible to do with the blob getter. You can see @azr's review to understand better where the code is supposed to go.
@sylviamoss thank you for the clarification! We'll get this work prioritized right away!
Hello @omerbensaadon, really cool! Is everything working out here? Do you need some help?
@azr thanks for the bump... this is on the roadmap and will be worked on sometime in the next few weeks. Sorry for the confusion, still new to some of this context, hopefully didn't annoy you too much! 😝
Got it, not annoyed at all; on the contrary, I'm pretty happy you want to take care of this 🙂! Thanks for your time! We're always here if you need more help.
Hi guys! What is the status of this? What's the remaining work to be done?
@omerbensaadon, just to get clarification: I'm guessing Pivotal isn't looking at supporting this PR anymore?
Hi all, just wondering if anyone is working on this anymore? My team would love to use this to store our modules.
Same here! Is anyone still working on adding this support?
Is there anything outstanding for this to be approved? I can try to help if necessary.
How can I help with this? @azr @davewongillies @kraihn
Wanting to be able to pull modules from Blob Storage as well... any progress on this?
This pull request adds support for Azure Blob Storage. Access is provided by an environment variable holding the account's access key, or a SAS token within the source URL.
Fixes #33
Replaces #56