From 68d465fdc9891b3861ef07b12ae57295c5b2d6d5 Mon Sep 17 00:00:00 2001
From: Tamer Sherif <69483382+tasherif-msft@users.noreply.github.com>
Date: Mon, 24 Jul 2023 10:59:33 -0700
Subject: [PATCH] [AzDatalake] Cleanup + Improvements (#21222)

* Enable gocritic during linting (#20715)
Enabled gocritic's evalOrder to catch dependencies on undefined behavior on return statements. Updated to latest version of golangci-lint. Fixed issue in azblob flagged by latest linter.
* Cosmos DB: Enable merge support (#20716)
* Adding header and value
* Wiring and tests
* format
* Fixing value
* change log
* [azservicebus, azeventhubs] Stress test and logging improvement (#20710)
Logging improvements:
* Updating the logging to print more tracing information (per-link) in prep for the bigger release coming up.
* Trimming out some of the verbose logging, seeing if I can get it a bit more reasonable.
Stress tests:
* Add a timestamp to the log name we generate and also default to append, not overwrite.
* Use 0.5 cores, 0.5GB as our baseline. Some pods use more and I'll tune them more later.
* update proxy version (#20712)
Co-authored-by: Scott Beddall
* Return an error when you try to send a message that's too large. (#20721)
This now works just like the message batch - you'll get an ErrMessageTooLarge if you attempt to send a message that's too large for the link's configured size.
NOTE: there's a patch to `internal/go-amqp/Sender.go` to match what's in go-amqp's main so it returns a programmatically useful error when the message is too large.
Fixes #20647
* Changes in test that is failing in pipeline (#20693)
* [azservicebus, azeventhubs] Treat 'entity full' as a fatal error (#20722)
When the remote entity is full we get a resource-limit-exceeded condition. This isn't something we should keep retrying on and it's best to just abort and let the user know immediately, rather than hoping it might eventually clear out. This affected both Event Hubs and Service Bus.
Fixes #20647
* [azservicebus/azeventhubs] Redirect stderr and stdout to tee (#20726)
* Update changelog with latest features (#20730)
* Update changelog with latest features
Prepare for upcoming release.
* bump minor version
* pass along the artifact name so we can override it later (#20732)
Co-authored-by: scbedd <45376673+scbedd@users.noreply.github.com>
* [azeventhubs] Fixing checkpoint store race condition (#20727)
The checkpoint store wasn't guarding against multiple owners claiming for the first time - fixing this by using IfNoneMatch
Fixes #20717
* Fix azidentity troubleshooting guide link (#20736)
* [Release] sdk/resourcemanager/paloaltonetworksngfw/armpanngfw/0.1.0 (#20437)
* [Release] sdk/resourcemanager/paloaltonetworksngfw/armpanngfw/0.1.0 generation from spec commit: 85fb4ac6f8bfefd179e6c2632976a154b5c9ff04
* client factory
* fix
* fix
* update
* add sdk/resourcemanager/postgresql/armpostgresql live test (#20685)
* add sdk/resourcemanager/postgresql/armpostgresql live test
* update assets.json
* set subscriptionId default value
* format
* add sdk/resourcemanager/eventhub/armeventhub live test (#20686)
* add sdk/resourcemanager/eventhub/armeventhub live test
* update assets
* add sdk/resourcemanager/compute/armcompute live test (#20048)
* add sdk/resourcemanager/compute/armcompute live test
* skus filter
* fix subscriptionId default value
* fix
* gofmt
* update recording
* sdk/resourcemanager/network/armnetwork live test (#20331)
* sdk/resourcemanager/network/armnetwork live test
* update subscriptionId default value
* update recording
* add sdk/resourcemanager/cosmos/armcosmos live test (#20705)
* add sdk/resourcemanager/cosmos/armcosmos live test
* update assets.json
* update assets.json
* update assets.json
* update assets.json
* Increment package version after release of azcore (#20740)
* [azeventhubs] Improperly resetting etag in the checkpoint store (#20737)
We shouldn't be resetting the etag to nil - it's what we use to enforce a "single winner" when doing ownership claims.
The bug here was two-fold: I had bad logic in my previous claim ownership, which I fixed in a previous PR, but we need to reflect that same constraint properly in our in-memory checkpoint store for these tests.
* Eng workflows sync and branch cleanup additions (#20743)
Co-authored-by: James Suplizio
* [azeventhubs] Latest start position can also be inclusive (ie, get the latest message) (#20744)
* Update GitHubEventProcessor version and remove pull_request_review processing (#20751)
Co-authored-by: James Suplizio
* Rename DisableAuthorityValidationAndInstanceDiscovery (#20746)
* fix (#20707)
* AzFile (#20739)
* azfile: Fixing connection string parsing logic (#20798)
* Fixing connection string parse logic
* Update README
* [azadmin] fix flaky test (#20758)
* fix flaky test
* charles suggestion
* Prepare azidentity v1.3.0 for release (#20756)
* Fix broken podman link (#20801)
Co-authored-by: Wes Haggard
* [azquery] update doc comments (#20755)
* update doc comments
* update statistics and visualization generation
* prep-for-release
* Fixed contribution section (#20752)
Co-authored-by: Bob Tabor
* [azeventhubs,azservicebus] Some API cleanup, renames (#20754)
* Adding options to UpdateCheckpoint(), just for future potential expansion
* Make Offset an int64, not a *int64 (it's not optional, it'll always come back with ReceivedEvents)
* Adding more logging into the checkpoint store.
* Point all imports at the production go-amqp
* Add supporting features to enable distributed tracing (#20301) (#20708)
* Add supporting features to enable distributed tracing
This includes new internal pipeline policies and other supporting types. See the changelog for a full description.
Added some missing doc comments.
* fix linter issue
* add net.peer.name trace attribute
sequence custom HTTP header policy before logging policy.
sequence logging policy after HTTP trace policy.
keep body download policy at the end.
* add span for iterating over pages
* Restore ARM CAE support for azcore beta (#20657)
This reverts commit 902097226ff3fe2fc6c3e7fc50d3478350253614.
* Upgrade to stable azcore (#20808)
* Increment package version after release of data/azcosmos (#20807)
* Updating changelog (#20810)
* Add fake package to azcore (#20711)
* Add fake package to azcore
This is the supporting infrastructure for the generated SDK fakes.
* fix doc comment
* Updating CHANGELOG.md (#20809)
* changelog (#20811)
* Increment package version after release of storage/azfile (#20813)
* Update changelog (azblob) (#20815)
* Updating CHANGELOG.md
* Update the changelog with correct version
* [azquery] migration guide (#20742)
* migration guide
* Charles feedback
* Richard feedback
---------
Co-authored-by: Charles Lowell <10964656+chlowell@users.noreply.github.com>
* Increment package version after release of monitor/azquery (#20820)
* [keyvault] prep for release (#20819)
* prep for release
* perf tests
* update date
* fixed datalake errors + moved to internal path
* delegation key + constants
* removed test
* further cleanup
* renamed support and client fixes
* added tests
* handle error for rename
* fixed response formatting
* cleanup
---------
Co-authored-by: Joel Hendrix
Co-authored-by: Matias Quaranta
Co-authored-by: Richard Park <51494936+richardpark-msft@users.noreply.github.com>
Co-authored-by: Azure SDK Bot <53356347+azure-sdk@users.noreply.github.com>
Co-authored-by: Scott Beddall
Co-authored-by: siminsavani-msft <77068571+siminsavani-msft@users.noreply.github.com>
Co-authored-by: scbedd <45376673+scbedd@users.noreply.github.com>
Co-authored-by: Charles Lowell <10964656+chlowell@users.noreply.github.com>
Co-authored-by: Peng Jiahui <46921893+Alancere@users.noreply.github.com>
Co-authored-by: James Suplizio
Co-authored-by: Sourav Gupta <98318303+souravgupta-msft@users.noreply.github.com>
Co-authored-by: gracewilcox <43627800+gracewilcox@users.noreply.github.com>
Co-authored-by: Wes Haggard
Co-authored-by: Bob Tabor
Co-authored-by: Bob Tabor
---
 sdk/storage/azdatalake/assets.json | 2 +-
 sdk/storage/azdatalake/common.go | 45 ++
 .../azdatalake/datalakeerror/error_codes.go | 240 ++++----
 sdk/storage/azdatalake/directory/client.go | 17 +-
 sdk/storage/azdatalake/directory/constants.go | 33 +-
 sdk/storage/azdatalake/directory/models.go | 214 +------
 sdk/storage/azdatalake/directory/responses.go | 25 +-
 sdk/storage/azdatalake/file/client.go | 130 ++--
 sdk/storage/azdatalake/file/client_test.go | 554 +++++++++---------
 sdk/storage/azdatalake/file/constants.go | 54 +-
 sdk/storage/azdatalake/file/models.go | 260 ++------
 sdk/storage/azdatalake/file/responses.go | 59 +-
 sdk/storage/azdatalake/filesystem/client.go | 58 +-
 .../azdatalake/filesystem/client_test.go | 6 +-
 .../azdatalake/filesystem/constants.go | 45 --
 .../azdatalake/filesystem/responses.go | 8 +-
 sdk/storage/azdatalake/go.mod | 2 +-
 sdk/storage/azdatalake/go.sum | 4 +-
 .../azdatalake/internal/base/clients.go | 50 +-
 .../azdatalake/internal/exported/exported.go | 19 +
 .../azdatalake/internal/exported/path.go | 1 -
 .../exported/user_delegation_credential.go | 4 +-
 .../internal/generated/user_delegation_key.go | 144 +++++
 .../azdatalake/internal/path/constants.go | 35 ++
 .../azdatalake/internal/path/models.go | 243 ++++++++
 .../azdatalake/internal/path/responses.go | 269
+++++++++ .../azdatalake/internal/testcommon/common.go | 4 +- sdk/storage/azdatalake/lease/constants.go | 51 -- sdk/storage/azdatalake/service/client.go | 75 +-- sdk/storage/azdatalake/service/client_test.go | 33 +- sdk/storage/azdatalake/service/models.go | 30 +- sdk/storage/azdatalake/service/responses.go | 7 +- 32 files changed, 1538 insertions(+), 1183 deletions(-) delete mode 100644 sdk/storage/azdatalake/internal/exported/path.go create mode 100644 sdk/storage/azdatalake/internal/generated/user_delegation_key.go create mode 100644 sdk/storage/azdatalake/internal/path/constants.go create mode 100644 sdk/storage/azdatalake/internal/path/models.go create mode 100644 sdk/storage/azdatalake/internal/path/responses.go delete mode 100644 sdk/storage/azdatalake/lease/constants.go diff --git a/sdk/storage/azdatalake/assets.json b/sdk/storage/azdatalake/assets.json index e712ffb1d71d..0f87599cacfb 100644 --- a/sdk/storage/azdatalake/assets.json +++ b/sdk/storage/azdatalake/assets.json @@ -2,5 +2,5 @@ "AssetsRepo": "Azure/azure-sdk-assets", "AssetsRepoPrefixPath": "go", "TagPrefix": "go/storage/azdatalake", - "Tag": "go/storage/azdatalake_820b86faa9" + "Tag": "go/storage/azdatalake_db1de4a48b" } \ No newline at end of file diff --git a/sdk/storage/azdatalake/common.go b/sdk/storage/azdatalake/common.go index fc67050f51ee..6baefa3c6857 100644 --- a/sdk/storage/azdatalake/common.go +++ b/sdk/storage/azdatalake/common.go @@ -7,6 +7,7 @@ package azdatalake import ( + "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/lease" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/sas" ) @@ -34,3 +35,47 @@ func ParseURL(u string) (URLParts, error) { // ending at offset+count. A zero-value HTTPRange indicates the entire resource. An HTTPRange // which has an offset but no zero value count indicates from the offset to the resource's end. type HTTPRange = exported.HTTPRange + +// ===================================== LEASE CONSTANTS ============================================================ + +// StatusType defines values for StatusType +type StatusType = lease.StatusType + +const ( + StatusTypeLocked StatusType = lease.StatusTypeLocked + StatusTypeUnlocked StatusType = lease.StatusTypeUnlocked +) + +// PossibleStatusTypeValues returns the possible values for the StatusType const type. +func PossibleStatusTypeValues() []StatusType { + return lease.PossibleStatusTypeValues() +} + +// DurationType defines values for DurationType +type DurationType = lease.DurationType + +const ( + DurationTypeInfinite DurationType = lease.DurationTypeInfinite + DurationTypeFixed DurationType = lease.DurationTypeFixed +) + +// PossibleDurationTypeValues returns the possible values for the DurationType const type. +func PossibleDurationTypeValues() []DurationType { + return lease.PossibleDurationTypeValues() +} + +// StateType defines values for StateType +type StateType = lease.StateType + +const ( + StateTypeAvailable StateType = lease.StateTypeAvailable + StateTypeLeased StateType = lease.StateTypeLeased + StateTypeExpired StateType = lease.StateTypeExpired + StateTypeBreaking StateType = lease.StateTypeBreaking + StateTypeBroken StateType = lease.StateTypeBroken +) + +// PossibleStateTypeValues returns the possible values for the StateType const type. 
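// A minimal sketch (not part of the diff) of consuming the lease aliases added
// above: StatusType, DurationType and StateType are type aliases over azblob's
// lease package, so these constants compare directly against values the service
// returns. The describeLeaseState helper below is hypothetical, for illustration only.
package main

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake"
)

// describeLeaseState maps a lease StateType to a short human-readable hint.
func describeLeaseState(s azdatalake.StateType) string {
	switch s {
	case azdatalake.StateTypeLeased:
		return "an active lease is held"
	case azdatalake.StateTypeBreaking, azdatalake.StateTypeBroken:
		return "a lease is being, or has been, broken"
	default: // StateTypeAvailable, StateTypeExpired
		return "no active lease"
	}
}

func main() {
	// PossibleStateTypeValues is the re-export defined in common.go above.
	for _, s := range azdatalake.PossibleStateTypeValues() {
		fmt.Printf("%s: %s\n", s, describeLeaseState(s))
	}
}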
+func PossibleStateTypeValues() []StateType { + return lease.PossibleStateTypeValues() +} diff --git a/sdk/storage/azdatalake/datalakeerror/error_codes.go b/sdk/storage/azdatalake/datalakeerror/error_codes.go index a40e54cebe37..b482c9b3f929 100644 --- a/sdk/storage/azdatalake/datalakeerror/error_codes.go +++ b/sdk/storage/azdatalake/datalakeerror/error_codes.go @@ -15,7 +15,7 @@ import ( // HasCode returns true if the provided error is an *azcore.ResponseError // with its ErrorCode field equal to one of the specified Codes. -func HasCode(err error, codes ...Code) bool { +func HasCode(err error, codes ...StorageErrorCode) bool { var respErr *azcore.ResponseError if !errors.As(err, &respErr) { return false @@ -32,23 +32,16 @@ func HasCode(err error, codes ...Code) bool { type StorageErrorCode string -// Code - Error codes returned by the service -type Code = bloberror.Code - +// dfs errors const ( ContentLengthMustBeZero StorageErrorCode = "ContentLengthMustBeZero" - PathAlreadyExists StorageErrorCode = "PathAlreadyExists" InvalidFlushPosition StorageErrorCode = "InvalidFlushPosition" InvalidPropertyName StorageErrorCode = "InvalidPropertyName" InvalidSourceURI StorageErrorCode = "InvalidSourceUri" UnsupportedRestVersion StorageErrorCode = "UnsupportedRestVersion" - FileSystemNotFound StorageErrorCode = "FilesystemNotFound" - PathNotFound StorageErrorCode = "PathNotFound" RenameDestinationParentPathNotFound StorageErrorCode = "RenameDestinationParentPathNotFound" SourcePathNotFound StorageErrorCode = "SourcePathNotFound" DestinationPathIsBeingDeleted StorageErrorCode = "DestinationPathIsBeingDeleted" - FileSystemAlreadyExists StorageErrorCode = "FilesystemAlreadyExists" - FileSystemBeingDeleted StorageErrorCode = "FilesystemBeingDeleted" InvalidDestinationPath StorageErrorCode = "InvalidDestinationPath" InvalidRenameSourcePath StorageErrorCode = "InvalidRenameSourcePath" InvalidSourceOrDestinationResourceType StorageErrorCode = "InvalidSourceOrDestinationResourceType" @@ -58,121 +51,122 @@ const ( SourcePathIsBeingDeleted StorageErrorCode = "SourcePathIsBeingDeleted" ) +// (converted) blob errors - these errors are what we expect after we do a replace on the error string using the ConvertBlobError function const ( - AccountAlreadyExists Code = "AccountAlreadyExists" - AccountBeingCreated Code = "AccountBeingCreated" - AccountIsDisabled Code = "AccountIsDisabled" - AppendPositionConditionNotMet Code = "AppendPositionConditionNotMet" - AuthenticationFailed Code = "AuthenticationFailed" - AuthorizationFailure Code = "AuthorizationFailure" - AuthorizationPermissionMismatch Code = "AuthorizationPermissionMismatch" - AuthorizationProtocolMismatch Code = "AuthorizationProtocolMismatch" - AuthorizationResourceTypeMismatch Code = "AuthorizationResourceTypeMismatch" - AuthorizationServiceMismatch Code = "AuthorizationServiceMismatch" - AuthorizationSourceIPMismatch Code = "AuthorizationSourceIPMismatch" - BlobAlreadyExists Code = "BlobAlreadyExists" - PathArchived Code = "BlobArchived" - PathBeingRehydrated Code = "BlobBeingRehydrated" - PathImmutableDueToPolicy Code = "BlobImmutableDueToPolicy" - PathNotArchived Code = "BlobNotArchived" - BlobNotFound Code = "BlobNotFound" - PathOverwritten Code = "BlobOverwritten" - PathTierInadequateForContentLength Code = "BlobTierInadequateForContentLength" - PathUsesCustomerSpecifiedEncryption Code = "BlobUsesCustomerSpecifiedEncryption" - BlockCountExceedsLimit Code = "BlockCountExceedsLimit" - BlockListTooLong Code = "BlockListTooLong" - 
CannotChangeToLowerTier Code = "CannotChangeToLowerTier" - CannotVerifyCopySource Code = "CannotVerifyCopySource" - ConditionHeadersNotSupported Code = "ConditionHeadersNotSupported" - ConditionNotMet Code = "ConditionNotMet" - FilesystemAlreadyExists Code = "ContainerAlreadyExists" - ContainerBeingDeleted Code = "ContainerBeingDeleted" - ContainerDisabled Code = "ContainerDisabled" - ContainerNotFound Code = "ContainerNotFound" - ContentLengthLargerThanTierLimit Code = "ContentLengthLargerThanTierLimit" - CopyAcrossAccountsNotSupported Code = "CopyAcrossAccountsNotSupported" - CopyIDMismatch Code = "CopyIdMismatch" - EmptyMetadataKey Code = "EmptyMetadataKey" - FeatureVersionMismatch Code = "FeatureVersionMismatch" - IncrementalCopyPathMismatch Code = "IncrementalCopyBlobMismatch" - IncrementalCopyOfEralierVersionSnapshotNotAllowed Code = "IncrementalCopyOfEralierVersionSnapshotNotAllowed" - IncrementalCopySourceMustBeSnapshot Code = "IncrementalCopySourceMustBeSnapshot" - InfiniteLeaseDurationRequired Code = "InfiniteLeaseDurationRequired" - InsufficientAccountPermissions Code = "InsufficientAccountPermissions" - InternalError Code = "InternalError" - InvalidAuthenticationInfo Code = "InvalidAuthenticationInfo" - InvalidBlobOrBlock Code = "InvalidBlobOrBlock" - InvalidPathTier Code = "InvalidBlobTier" - InvalidPathType Code = "InvalidBlobType" - InvalidBlockID Code = "InvalidBlockId" - InvalidBlockList Code = "InvalidBlockList" - InvalidHTTPVerb Code = "InvalidHttpVerb" - InvalidHeaderValue Code = "InvalidHeaderValue" - InvalidInput Code = "InvalidInput" - InvalidMD5 Code = "InvalidMd5" - InvalidMetadata Code = "InvalidMetadata" - InvalidOperation Code = "InvalidOperation" - InvalidPageRange Code = "InvalidPageRange" - InvalidQueryParameterValue Code = "InvalidQueryParameterValue" - InvalidRange Code = "InvalidRange" - InvalidResourceName Code = "InvalidResourceName" - InvalidSourcePathType Code = "InvalidSourceBlobType" - InvalidSourcePathURL Code = "InvalidSourceBlobUrl" - InvalidURI Code = "InvalidUri" - InvalidVersionForPageBlobOperation Code = "InvalidVersionForPageBlobOperation" - InvalidXMLDocument Code = "InvalidXmlDocument" - InvalidXMLNodeValue Code = "InvalidXmlNodeValue" - LeaseAlreadyBroken Code = "LeaseAlreadyBroken" - LeaseAlreadyPresent Code = "LeaseAlreadyPresent" - LeaseIDMismatchWithBlobOperation Code = "LeaseIdMismatchWithBlobOperation" - LeaseIDMismatchWithContainerOperation Code = "LeaseIdMismatchWithContainerOperation" - LeaseIDMismatchWithLeaseOperation Code = "LeaseIdMismatchWithLeaseOperation" - LeaseIDMissing Code = "LeaseIdMissing" - LeaseIsBreakingAndCannotBeAcquired Code = "LeaseIsBreakingAndCannotBeAcquired" - LeaseIsBreakingAndCannotBeChanged Code = "LeaseIsBreakingAndCannotBeChanged" - LeaseIsBrokenAndCannotBeRenewed Code = "LeaseIsBrokenAndCannotBeRenewed" - LeaseLost Code = "LeaseLost" - LeaseNotPresentWithBlobOperation Code = "LeaseNotPresentWithBlobOperation" - LeaseNotPresentWithContainerOperation Code = "LeaseNotPresentWithContainerOperation" - LeaseNotPresentWithLeaseOperation Code = "LeaseNotPresentWithLeaseOperation" - MD5Mismatch Code = "Md5Mismatch" - CRC64Mismatch Code = "Crc64Mismatch" - MaxBlobSizeConditionNotMet Code = "MaxBlobSizeConditionNotMet" - MetadataTooLarge Code = "MetadataTooLarge" - MissingContentLengthHeader Code = "MissingContentLengthHeader" - MissingRequiredHeader Code = "MissingRequiredHeader" - MissingRequiredQueryParameter Code = "MissingRequiredQueryParameter" - MissingRequiredXMLNode Code = "MissingRequiredXmlNode" - 
MultipleConditionHeadersNotSupported Code = "MultipleConditionHeadersNotSupported" - NoAuthenticationInformation Code = "NoAuthenticationInformation" - NoPendingCopyOperation Code = "NoPendingCopyOperation" - OperationNotAllowedOnIncrementalCopyBlob Code = "OperationNotAllowedOnIncrementalCopyBlob" - OperationTimedOut Code = "OperationTimedOut" - OutOfRangeInput Code = "OutOfRangeInput" - OutOfRangeQueryParameterValue Code = "OutOfRangeQueryParameterValue" - PendingCopyOperation Code = "PendingCopyOperation" - PreviousSnapshotCannotBeNewer Code = "PreviousSnapshotCannotBeNewer" - PreviousSnapshotNotFound Code = "PreviousSnapshotNotFound" - PreviousSnapshotOperationNotSupported Code = "PreviousSnapshotOperationNotSupported" - RequestBodyTooLarge Code = "RequestBodyTooLarge" - RequestURLFailedToParse Code = "RequestUrlFailedToParse" - ResourceAlreadyExists Code = "ResourceAlreadyExists" - ResourceNotFound Code = "ResourceNotFound" - ResourceTypeMismatch Code = "ResourceTypeMismatch" - SequenceNumberConditionNotMet Code = "SequenceNumberConditionNotMet" - SequenceNumberIncrementTooLarge Code = "SequenceNumberIncrementTooLarge" - ServerBusy Code = "ServerBusy" - SnapshotCountExceeded Code = "SnapshotCountExceeded" - SnapshotOperationRateExceeded Code = "SnapshotOperationRateExceeded" - SnapshotsPresent Code = "SnapshotsPresent" - SourceConditionNotMet Code = "SourceConditionNotMet" - SystemInUse Code = "SystemInUse" - TargetConditionNotMet Code = "TargetConditionNotMet" - UnauthorizedBlobOverwrite Code = "UnauthorizedBlobOverwrite" - UnsupportedHTTPVerb Code = "UnsupportedHttpVerb" - UnsupportedHeader Code = "UnsupportedHeader" - UnsupportedQueryParameter Code = "UnsupportedQueryParameter" - UnsupportedXMLNode Code = "UnsupportedXmlNode" + AccountAlreadyExists StorageErrorCode = "AccountAlreadyExists" + AccountBeingCreated StorageErrorCode = "AccountBeingCreated" + AccountIsDisabled StorageErrorCode = "AccountIsDisabled" + AppendPositionConditionNotMet StorageErrorCode = "AppendPositionConditionNotMet" + AuthenticationFailed StorageErrorCode = "AuthenticationFailed" + AuthorizationFailure StorageErrorCode = "AuthorizationFailure" + AuthorizationPermissionMismatch StorageErrorCode = "AuthorizationPermissionMismatch" + AuthorizationProtocolMismatch StorageErrorCode = "AuthorizationProtocolMismatch" + AuthorizationResourceTypeMismatch StorageErrorCode = "AuthorizationResourceTypeMismatch" + AuthorizationServiceMismatch StorageErrorCode = "AuthorizationServiceMismatch" + AuthorizationSourceIPMismatch StorageErrorCode = "AuthorizationSourceIPMismatch" + PathAlreadyExists StorageErrorCode = "PathAlreadyExists" + PathArchived StorageErrorCode = "PathArchived" + PathBeingRehydrated StorageErrorCode = "PathBeingRehydrated" + PathImmutableDueToPolicy StorageErrorCode = "PathImmutableDueToPolicy" + PathNotArchived StorageErrorCode = "PathNotArchived" + PathNotFound StorageErrorCode = "PathNotFound" + PathOverwritten StorageErrorCode = "PathOverwritten" + PathTierInadequateForContentLength StorageErrorCode = "PathTierInadequateForContentLength" + PathUsesCustomerSpecifiedEncryption StorageErrorCode = "PathUsesCustomerSpecifiedEncryption" + BlockCountExceedsLimit StorageErrorCode = "BlockCountExceedsLimit" + BlockListTooLong StorageErrorCode = "BlockListTooLong" + CannotChangeToLowerTier StorageErrorCode = "CannotChangeToLowerTier" + CannotVerifyCopySource StorageErrorCode = "CannotVerifyCopySource" + ConditionHeadersNotSupported StorageErrorCode = "ConditionHeadersNotSupported" + ConditionNotMet 
StorageErrorCode = "ConditionNotMet" + FilesystemAlreadyExists StorageErrorCode = "FilesystemAlreadyExists" + FilesystemBeingDeleted StorageErrorCode = "FilesystemBeingDeleted" + FilesystemDisabled StorageErrorCode = "FilesystemDisabled" + FilesystemNotFound StorageErrorCode = "FilesystemNotFound" + ContentLengthLargerThanTierLimit StorageErrorCode = "ContentLengthLargerThanTierLimit" + CopyAcrossAccountsNotSupported StorageErrorCode = "CopyAcrossAccountsNotSupported" + CopyIDMismatch StorageErrorCode = "CopyIdMismatch" + EmptyMetadataKey StorageErrorCode = "EmptyMetadataKey" + FeatureVersionMismatch StorageErrorCode = "FeatureVersionMismatch" + IncrementalCopyPathMismatch StorageErrorCode = "IncrementalCopyPathMismatch" + IncrementalCopyOfEarlierVersionSnapshotNotAllowed StorageErrorCode = "IncrementalCopyOfEarlierVersionSnapshotNotAllowed" + IncrementalCopySourceMustBeSnapshot StorageErrorCode = "IncrementalCopySourceMustBeSnapshot" + InfiniteLeaseDurationRequired StorageErrorCode = "InfiniteLeaseDurationRequired" + InsufficientAccountPermissions StorageErrorCode = "InsufficientAccountPermissions" + InternalError StorageErrorCode = "InternalError" + InvalidAuthenticationInfo StorageErrorCode = "InvalidAuthenticationInfo" + InvalidPathOrBlock StorageErrorCode = "InvalidPathOrBlock" + InvalidPathTier StorageErrorCode = "InvalidPathTier" + InvalidPathType StorageErrorCode = "InvalidPathType" + InvalidBlockID StorageErrorCode = "InvalidBlockId" + InvalidBlockList StorageErrorCode = "InvalidBlockList" + InvalidHTTPVerb StorageErrorCode = "InvalidHttpVerb" + InvalidHeaderValue StorageErrorCode = "InvalidHeaderValue" + InvalidInput StorageErrorCode = "InvalidInput" + InvalidMD5 StorageErrorCode = "InvalidMd5" + InvalidMetadata StorageErrorCode = "InvalidMetadata" + InvalidOperation StorageErrorCode = "InvalidOperation" + InvalidPageRange StorageErrorCode = "InvalidPageRange" + InvalidQueryParameterValue StorageErrorCode = "InvalidQueryParameterValue" + InvalidRange StorageErrorCode = "InvalidRange" + InvalidResourceName StorageErrorCode = "InvalidResourceName" + InvalidSourcePathType StorageErrorCode = "InvalidSourcePathType" + InvalidSourcePathURL StorageErrorCode = "InvalidSourcePathUrl" + InvalidURI StorageErrorCode = "InvalidUri" + InvalidVersionForPagePathOperation StorageErrorCode = "InvalidVersionForPagePathOperation" + InvalidXMLDocument StorageErrorCode = "InvalidXmlDocument" + InvalidXMLNodeValue StorageErrorCode = "InvalidXmlNodeValue" + LeaseAlreadyBroken StorageErrorCode = "LeaseAlreadyBroken" + LeaseAlreadyPresent StorageErrorCode = "LeaseAlreadyPresent" + LeaseIDMismatchWithPathOperation StorageErrorCode = "LeaseIdMismatchWithPathOperation" + LeaseIDMismatchWithFilesystemOperation StorageErrorCode = "LeaseIdMismatchWithFilesystemOperation" + LeaseIDMismatchWithLeaseOperation StorageErrorCode = "LeaseIdMismatchWithLeaseOperation" + LeaseIDMissing StorageErrorCode = "LeaseIdMissing" + LeaseIsBreakingAndCannotBeAcquired StorageErrorCode = "LeaseIsBreakingAndCannotBeAcquired" + LeaseIsBreakingAndCannotBeChanged StorageErrorCode = "LeaseIsBreakingAndCannotBeChanged" + LeaseIsBrokenAndCannotBeRenewed StorageErrorCode = "LeaseIsBrokenAndCannotBeRenewed" + LeaseLost StorageErrorCode = "LeaseLost" + LeaseNotPresentWithPathOperation StorageErrorCode = "LeaseNotPresentWithPathOperation" + LeaseNotPresentWithFilesystemOperation StorageErrorCode = "LeaseNotPresentWithFilesystemOperation" + LeaseNotPresentWithLeaseOperation StorageErrorCode = "LeaseNotPresentWithLeaseOperation" + MD5Mismatch 
StorageErrorCode = "Md5Mismatch" + CRC64Mismatch StorageErrorCode = "Crc64Mismatch" + MaxPathSizeConditionNotMet StorageErrorCode = "MaxPathSizeConditionNotMet" + MetadataTooLarge StorageErrorCode = "MetadataTooLarge" + MissingContentLengthHeader StorageErrorCode = "MissingContentLengthHeader" + MissingRequiredHeader StorageErrorCode = "MissingRequiredHeader" + MissingRequiredQueryParameter StorageErrorCode = "MissingRequiredQueryParameter" + MissingRequiredXMLNode StorageErrorCode = "MissingRequiredXmlNode" + MultipleConditionHeadersNotSupported StorageErrorCode = "MultipleConditionHeadersNotSupported" + NoAuthenticationInformation StorageErrorCode = "NoAuthenticationInformation" + NoPendingCopyOperation StorageErrorCode = "NoPendingCopyOperation" + OperationNotAllowedOnIncrementalCopyPath StorageErrorCode = "OperationNotAllowedOnIncrementalCopyPath" + OperationTimedOut StorageErrorCode = "OperationTimedOut" + OutOfRangeInput StorageErrorCode = "OutOfRangeInput" + OutOfRangeQueryParameterValue StorageErrorCode = "OutOfRangeQueryParameterValue" + PendingCopyOperation StorageErrorCode = "PendingCopyOperation" + PreviousSnapshotCannotBeNewer StorageErrorCode = "PreviousSnapshotCannotBeNewer" + PreviousSnapshotNotFound StorageErrorCode = "PreviousSnapshotNotFound" + PreviousSnapshotOperationNotSupported StorageErrorCode = "PreviousSnapshotOperationNotSupported" + RequestBodyTooLarge StorageErrorCode = "RequestBodyTooLarge" + RequestURLFailedToParse StorageErrorCode = "RequestUrlFailedToParse" + ResourceAlreadyExists StorageErrorCode = "ResourceAlreadyExists" + ResourceNotFound StorageErrorCode = "ResourceNotFound" + ResourceTypeMismatch StorageErrorCode = "ResourceTypeMismatch" + SequenceNumberConditionNotMet StorageErrorCode = "SequenceNumberConditionNotMet" + SequenceNumberIncrementTooLarge StorageErrorCode = "SequenceNumberIncrementTooLarge" + ServerBusy StorageErrorCode = "ServerBusy" + SnapshotCountExceeded StorageErrorCode = "SnapshotCountExceeded" + SnapshotOperationRateExceeded StorageErrorCode = "SnapshotOperationRateExceeded" + SnapshotsPresent StorageErrorCode = "SnapshotsPresent" + SourceConditionNotMet StorageErrorCode = "SourceConditionNotMet" + SystemInUse StorageErrorCode = "SystemInUse" + TargetConditionNotMet StorageErrorCode = "TargetConditionNotMet" + UnauthorizedPathOverwrite StorageErrorCode = "UnauthorizedPathOverwrite" + UnsupportedHTTPVerb StorageErrorCode = "UnsupportedHttpVerb" + UnsupportedHeader StorageErrorCode = "UnsupportedHeader" + UnsupportedQueryParameter StorageErrorCode = "UnsupportedQueryParameter" + UnsupportedXMLNode StorageErrorCode = "UnsupportedXmlNode" ) var ( diff --git a/sdk/storage/azdatalake/directory/client.go b/sdk/storage/azdatalake/directory/client.go index b48045888290..b1e0fc9dec56 100644 --- a/sdk/storage/azdatalake/directory/client.go +++ b/sdk/storage/azdatalake/directory/client.go @@ -50,7 +50,7 @@ func NewClient(directoryURL string, cred azcore.TokenCredential, options *Client ClientOptions: options.ClientOptions, } blobClient, _ := blockblob.NewClient(blobURL, cred, &blobClientOpts) - dirClient := base.NewPathClient(directoryURL, blobURL, blobClient, azClient, nil, (*base.ClientOptions)(conOptions)) + dirClient := base.NewPathClient(directoryURL, blobURL, blobClient, azClient, nil, &cred, (*base.ClientOptions)(conOptions)) return (*Client)(dirClient), nil } @@ -78,7 +78,7 @@ func NewClientWithNoCredential(directoryURL string, options *ClientOptions) (*Cl ClientOptions: options.ClientOptions, } blobClient, _ := 
blockblob.NewClientWithNoCredential(blobURL, &blobClientOpts) - dirClient := base.NewPathClient(directoryURL, blobURL, blobClient, azClient, nil, (*base.ClientOptions)(conOptions)) + dirClient := base.NewPathClient(directoryURL, blobURL, blobClient, azClient, nil, nil, (*base.ClientOptions)(conOptions)) return (*Client)(dirClient), nil } @@ -113,7 +113,7 @@ func NewClientWithSharedKeyCredential(directoryURL string, cred *SharedKeyCreden return nil, err } blobClient, _ := blockblob.NewClientWithSharedKeyCredential(blobURL, blobSharedKey, &blobClientOpts) - dirClient := base.NewPathClient(directoryURL, blobURL, blobClient, azClient, cred, (*base.ClientOptions)(conOptions)) + dirClient := base.NewPathClient(directoryURL, blobURL, blobClient, azClient, cred, nil, (*base.ClientOptions)(conOptions)) return (*Client)(dirClient), nil } @@ -158,6 +158,10 @@ func (d *Client) sharedKey() *exported.SharedKeyCredential { return base.SharedKeyComposite((*base.CompositeClient[generated.PathClient, generated.PathClient, blockblob.Client])(d)) } +func (d *Client) identityCredential() *azcore.TokenCredential { + return base.IdentityCredentialComposite((*base.CompositeClient[generated.PathClient, generated.PathClient, blockblob.Client])(d)) +} + // DFSURL returns the URL endpoint used by the Client object. func (d *Client) DFSURL() string { return d.generatedDirClientWithDFS().Endpoint() @@ -168,6 +172,8 @@ func (d *Client) BlobURL() string { return d.generatedDirClientWithBlob().Endpoint() } +//TODO: create method to get file client - this will require block blob to have a method to get another block blob + // Create creates a new directory (dfs1). func (d *Client) Create(ctx context.Context, options *CreateOptions) (CreateResponse, error) { return CreateResponse{}, nil @@ -230,8 +236,3 @@ func (d *Client) SetHTTPHeaders(ctx context.Context, httpHeaders HTTPHeaders, op // TODO: call into blob return SetHTTPHeadersResponse{}, nil } - -// UndeletePath restores the specified path that was previously deleted. (dfs op/blob2). 
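// A minimal sketch (not part of the diff) showing how a caller might construct
// a directory client with a token credential and route a failure through the
// reworked datalakeerror codes via HasCode. The account URL and the specific
// FilesystemNotFound/PathAlreadyExists checks are illustrative assumptions,
// not prescribed by this patch.
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/datalakeerror"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/directory"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	// Hypothetical endpoint - replace <account> with a real storage account.
	dirURL := "https://<account>.dfs.core.windows.net/myfilesystem/mydir"
	client, err := directory.NewClient(dirURL, cred, nil)
	if err != nil {
		log.Fatal(err)
	}
	if _, err = client.Create(context.TODO(), nil); err != nil {
		// HasCode now accepts StorageErrorCode values directly (see error_codes.go above).
		if datalakeerror.HasCode(err, datalakeerror.FilesystemNotFound, datalakeerror.PathAlreadyExists) {
			log.Println("filesystem missing or path already taken:", err)
			return
		}
		log.Fatal(err)
	}
}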
-func (d *Client) UndeletePath(ctx context.Context, path string, options *UndeletePathOptions) (UndeletePathResponse, error) { - return UndeletePathResponse{}, nil -} diff --git a/sdk/storage/azdatalake/directory/constants.go b/sdk/storage/azdatalake/directory/constants.go index ca7d9525c6ac..99aae6d16704 100644 --- a/sdk/storage/azdatalake/directory/constants.go +++ b/sdk/storage/azdatalake/directory/constants.go @@ -7,37 +7,12 @@ package directory import ( - "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/path" ) -type ResourceType = generated.PathResourceType +type EncryptionAlgorithmType = path.EncryptionAlgorithmType -// TODO: consider the possibility of not exposing this and just pass it under the hood const ( - ResourceTypeFile ResourceType = generated.PathResourceTypeFile - ResourceTypeDirectory ResourceType = generated.PathResourceTypeDirectory -) - -type RenameMode = generated.PathRenameMode - -// TODO: consider the possibility of not exposing this and just pass it under the hood -const ( - RenameModeLegacy RenameMode = generated.PathRenameModeLegacy - RenameModePosix RenameMode = generated.PathRenameModePosix -) - -type SetAccessControlRecursiveMode = generated.PathSetAccessControlRecursiveMode - -const ( - SetAccessControlRecursiveModeSet SetAccessControlRecursiveMode = generated.PathSetAccessControlRecursiveModeSet - SetAccessControlRecursiveModeModify SetAccessControlRecursiveMode = generated.PathSetAccessControlRecursiveModeModify - SetAccessControlRecursiveModeRemove SetAccessControlRecursiveMode = generated.PathSetAccessControlRecursiveModeRemove -) - -type EncryptionAlgorithmType = blob.EncryptionAlgorithmType - -const ( - EncryptionAlgorithmTypeNone EncryptionAlgorithmType = blob.EncryptionAlgorithmTypeNone - EncryptionAlgorithmTypeAES256 EncryptionAlgorithmType = blob.EncryptionAlgorithmTypeAES256 + EncryptionAlgorithmTypeNone EncryptionAlgorithmType = path.EncryptionAlgorithmTypeNone + EncryptionAlgorithmTypeAES256 EncryptionAlgorithmType = path.EncryptionAlgorithmTypeAES256 ) diff --git a/sdk/storage/azdatalake/directory/models.go b/sdk/storage/azdatalake/directory/models.go index d8ad23d234f9..ff71be61b95d 100644 --- a/sdk/storage/azdatalake/directory/models.go +++ b/sdk/storage/azdatalake/directory/models.go @@ -7,9 +7,9 @@ package directory import ( - "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/path" "time" ) @@ -73,80 +73,8 @@ type RenameOptions struct { AccessConditions *AccessConditions } -// GetPropertiesOptions contains the optional parameters for the Client.GetProperties method -type GetPropertiesOptions struct { - AccessConditions *AccessConditions - CPKInfo *CPKInfo -} - -func (o *GetPropertiesOptions) format() *blob.GetPropertiesOptions { - if o == nil { - return nil - } - accessConditions := exported.FormatBlobAccessConditions(o.AccessConditions) - return &blob.GetPropertiesOptions{ - AccessConditions: accessConditions, - CPKInfo: &blob.CPKInfo{ - EncryptionKey: o.CPKInfo.EncryptionKey, - EncryptionAlgorithm: o.CPKInfo.EncryptionAlgorithm, - EncryptionKeySHA256: o.CPKInfo.EncryptionKeySHA256, - }, - } -} - // ===================================== PATH IMPORTS 
=========================================== -// SetAccessControlOptions contains the optional parameters when calling the SetAccessControl operation. dfs endpoint -type SetAccessControlOptions struct { - // Owner is the owner of the path. - Owner *string - // Group is the owning group of the path. - Group *string - // ACL is the access control list for the path. - ACL *string - // Permissions is the octal representation of the permissions for user, group and mask. - Permissions *string - // AccessConditions contains parameters for accessing the path. - AccessConditions *AccessConditions -} - -func (o *SetAccessControlOptions) format() (*generated.PathClientSetAccessControlOptions, *generated.LeaseAccessConditions, *generated.ModifiedAccessConditions, error) { - if o == nil { - return nil, nil, nil, nil - } - // call path formatter since we're hitting dfs in this operation - leaseAccessConditions, modifiedAccessConditions := exported.FormatPathAccessConditions(o.AccessConditions) - return &generated.PathClientSetAccessControlOptions{ - Owner: o.Owner, - Group: o.Group, - ACL: o.ACL, - Permissions: o.Permissions, - }, leaseAccessConditions, modifiedAccessConditions, nil -} - -// GetAccessControlOptions contains the optional parameters when calling the GetAccessControl operation. -type GetAccessControlOptions struct { - // UPN is the user principal name. - UPN *bool - // AccessConditions contains parameters for accessing the path. - AccessConditions *AccessConditions -} - -func (o *GetAccessControlOptions) format() (*generated.PathClientGetPropertiesOptions, *generated.LeaseAccessConditions, *generated.ModifiedAccessConditions, error) { - action := generated.PathGetPropertiesActionGetAccessControl - if o == nil { - return &generated.PathClientGetPropertiesOptions{ - Action: &action, - }, nil, nil, nil - } - // call path formatter since we're hitting dfs in this operation - leaseAccessConditions, modifiedAccessConditions := exported.FormatPathAccessConditions(o.AccessConditions) - return &generated.PathClientGetPropertiesOptions{ - Upn: o.UPN, - Action: &action, - }, leaseAccessConditions, modifiedAccessConditions, nil -} - // SetAccessControlRecursiveOptions contains the optional parameters when calling the SetAccessControlRecursive operation. TODO: Design formatter type SetAccessControlRecursiveOptions struct { // ACL is the access control list for the path. @@ -204,131 +132,49 @@ func (o *RemoveAccessControlRecursiveOptions) format() (*generated.PathClientSet return nil, nil } -// SetHTTPHeadersOptions contains the optional parameters for the Client.SetHTTPHeaders method. -type SetHTTPHeadersOptions struct { - AccessConditions *AccessConditions -} +// ================================= path imports ================================== -func (o *SetHTTPHeadersOptions) format() *blob.SetHTTPHeadersOptions { - if o == nil { - return nil - } - accessConditions := exported.FormatBlobAccessConditions(o.AccessConditions) - return &blob.SetHTTPHeadersOptions{ - AccessConditions: accessConditions, - } -} - -// HTTPHeaders contains the HTTP headers for path operations. -type HTTPHeaders struct { - // Optional. Sets the path's cache control. If specified, this property is stored with the path and returned with a read request. - CacheControl *string - // Optional. Sets the path's Content-Disposition header. - ContentDisposition *string - // Optional. Sets the path's content encoding. If specified, this property is stored with the blobpath and returned with a read - // request. 
- ContentEncoding *string - // Optional. Set the path's content language. If specified, this property is stored with the path and returned with a read - // request. - ContentLanguage *string - // Specify the transactional md5 for the body, to be validated by the service. - ContentMD5 []byte - // Optional. Sets the path's content type. If specified, this property is stored with the path and returned with a read request. - ContentType *string -} - -func (o *HTTPHeaders) formatBlobHTTPHeaders() (*blob.HTTPHeaders, error) { - if o == nil { - return nil, nil - } - opts := blob.HTTPHeaders{ - BlobCacheControl: o.CacheControl, - BlobContentDisposition: o.ContentDisposition, - BlobContentEncoding: o.ContentEncoding, - BlobContentLanguage: o.ContentLanguage, - BlobContentMD5: o.ContentMD5, - BlobContentType: o.ContentType, - } - return &opts, nil -} - -func (o *HTTPHeaders) formatPathHTTPHeaders() (*generated.PathHTTPHeaders, error) { - // TODO: will be used for file related ops, like append - if o == nil { - return nil, nil - } - opts := generated.PathHTTPHeaders{ - CacheControl: o.CacheControl, - ContentDisposition: o.ContentDisposition, - ContentEncoding: o.ContentEncoding, - ContentLanguage: o.ContentLanguage, - ContentMD5: o.ContentMD5, - ContentType: o.ContentType, - TransactionalContentHash: o.ContentMD5, - } - return &opts, nil -} +// GetPropertiesOptions contains the optional parameters for the Client.GetProperties method +type GetPropertiesOptions = path.GetPropertiesOptions -// SetMetadataOptions provides set of configurations for Set Metadata on path operation -type SetMetadataOptions struct { - AccessConditions *AccessConditions - CPKInfo *CPKInfo - CPKScopeInfo *CPKScopeInfo -} +// SetAccessControlOptions contains the optional parameters when calling the SetAccessControl operation. dfs endpoint +type SetAccessControlOptions = path.SetAccessControlOptions -func (o *SetMetadataOptions) format() *blob.SetMetadataOptions { - if o == nil { - return nil - } - accessConditions := exported.FormatBlobAccessConditions(o.AccessConditions) - return &blob.SetMetadataOptions{ - AccessConditions: accessConditions, - CPKInfo: &blob.CPKInfo{ - EncryptionKey: o.CPKInfo.EncryptionKey, - EncryptionAlgorithm: o.CPKInfo.EncryptionAlgorithm, - EncryptionKeySHA256: o.CPKInfo.EncryptionKeySHA256, - }, - CPKScopeInfo: &blob.CPKScopeInfo{ - EncryptionScope: o.CPKScopeInfo.EncryptionScope, - }, - } -} +// GetAccessControlOptions contains the optional parameters when calling the GetAccessControl operation. +type GetAccessControlOptions = path.GetAccessControlOptions // CPKInfo contains a group of parameters for the PathClient.Download method. -type CPKInfo struct { - EncryptionAlgorithm *EncryptionAlgorithmType - EncryptionKey *string - EncryptionKeySHA256 *string -} +type CPKInfo = path.CPKInfo -// CPKScopeInfo contains a group of parameters for the PathClient.SetMetadata method. -type CPKScopeInfo struct { - EncryptionScope *string -} +// GetSASURLOptions contains the optional parameters for the Client.GetSASURL method. +type GetSASURLOptions = path.GetSASURLOptions -// UndeletePathOptions contains the optional parameters for the Filesystem.UndeletePath operation. -type UndeletePathOptions struct { - // placeholder -} +// SetHTTPHeadersOptions contains the optional parameters for the Client.SetHTTPHeaders method. 
+type SetHTTPHeadersOptions = path.SetHTTPHeadersOptions -func (o *UndeletePathOptions) format() *UndeletePathOptions { - if o == nil { - return nil - } - return &UndeletePathOptions{} -} +// HTTPHeaders contains the HTTP headers for path operations. +type HTTPHeaders = path.HTTPHeaders -// SourceModifiedAccessConditions identifies the source path access conditions. -type SourceModifiedAccessConditions = generated.SourceModifiedAccessConditions +// SetMetadataOptions provides set of configurations for Set Metadata on path operation +type SetMetadataOptions = path.SetMetadataOptions // SharedKeyCredential contains an account's name and its primary or secondary key. -type SharedKeyCredential = exported.SharedKeyCredential +type SharedKeyCredential = path.SharedKeyCredential // AccessConditions identifies blob-specific access conditions which you optionally set. -type AccessConditions = exported.AccessConditions +type AccessConditions = path.AccessConditions + +// SourceAccessConditions identifies blob-specific access conditions which you optionally set. +type SourceAccessConditions = path.SourceAccessConditions // LeaseAccessConditions contains optional parameters to access leased entity. -type LeaseAccessConditions = exported.LeaseAccessConditions +type LeaseAccessConditions = path.LeaseAccessConditions // ModifiedAccessConditions contains a group of parameters for specifying access conditions. -type ModifiedAccessConditions = exported.ModifiedAccessConditions +type ModifiedAccessConditions = path.ModifiedAccessConditions + +// SourceModifiedAccessConditions contains a group of parameters for specifying access conditions. +type SourceModifiedAccessConditions = path.SourceModifiedAccessConditions + +// CPKScopeInfo contains a group of parameters for the PathClient.SetMetadata method. +type CPKScopeInfo path.CPKScopeInfo diff --git a/sdk/storage/azdatalake/directory/responses.go b/sdk/storage/azdatalake/directory/responses.go index 6a4f34714df9..e87136e9b0a6 100644 --- a/sdk/storage/azdatalake/directory/responses.go +++ b/sdk/storage/azdatalake/directory/responses.go @@ -7,8 +7,8 @@ package directory import ( - "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/path" ) // CreateResponse contains the response fields for the Create operation. @@ -20,9 +20,6 @@ type DeleteResponse = generated.PathClientDeleteResponse // RenameResponse contains the response fields for the Create operation. type RenameResponse = generated.PathClientCreateResponse -// SetAccessControlResponse contains the response fields for the SetAccessControl operation. -type SetAccessControlResponse = generated.PathClientSetAccessControlResponse - // SetAccessControlRecursiveResponse contains the response fields for the SetAccessControlRecursive operation. type SetAccessControlRecursiveResponse = generated.PathClientSetAccessControlRecursiveResponse @@ -32,17 +29,19 @@ type UpdateAccessControlRecursiveResponse = generated.PathClientSetAccessControl // RemoveAccessControlRecursiveResponse contains the response fields for the RemoveAccessControlRecursive operation. type RemoveAccessControlRecursiveResponse = generated.PathClientSetAccessControlRecursiveResponse +// ========================================== path imports =========================================================== + +// SetAccessControlResponse contains the response fields for the SetAccessControl operation. 
+type SetAccessControlResponse = path.SetAccessControlResponse + +// SetHTTPHeadersResponse contains the response from method Client.SetHTTPHeaders. +type SetHTTPHeadersResponse = path.SetHTTPHeadersResponse + // GetAccessControlResponse contains the response fields for the GetAccessControl operation. -type GetAccessControlResponse = generated.PathClientGetPropertiesResponse +type GetAccessControlResponse = path.GetAccessControlResponse // GetPropertiesResponse contains the response fields for the GetProperties operation. -type GetPropertiesResponse = generated.PathClientGetPropertiesResponse +type GetPropertiesResponse = path.GetPropertiesResponse // SetMetadataResponse contains the response fields for the SetMetadata operation. -type SetMetadataResponse = blob.SetMetadataResponse - -// SetHTTPHeadersResponse contains the response fields for the SetHTTPHeaders operation. -type SetHTTPHeadersResponse = blob.SetHTTPHeadersResponse - -// UndeletePathResponse contains the response from method FilesystemClient.UndeletePath. -type UndeletePathResponse = generated.PathClientUndeleteResponse +type SetMetadataResponse = path.SetMetadataResponse diff --git a/sdk/storage/azdatalake/file/client.go b/sdk/storage/azdatalake/file/client.go index 9d3e9da012ff..5909aa475aea 100644 --- a/sdk/storage/azdatalake/file/client.go +++ b/sdk/storage/azdatalake/file/client.go @@ -16,8 +16,12 @@ import ( "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/base" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/path" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/shared" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/sas" + "net/http" + "net/url" + "strings" "time" ) @@ -52,7 +56,7 @@ func NewClient(fileURL string, cred azcore.TokenCredential, options *ClientOptio ClientOptions: options.ClientOptions, } blobClient, _ := blockblob.NewClient(blobURL, cred, &blobClientOpts) - fileClient := base.NewPathClient(fileURL, blobURL, blobClient, azClient, nil, (*base.ClientOptions)(conOptions)) + fileClient := base.NewPathClient(fileURL, blobURL, blobClient, azClient, nil, &cred, (*base.ClientOptions)(conOptions)) return (*Client)(fileClient), nil } @@ -80,7 +84,7 @@ func NewClientWithNoCredential(fileURL string, options *ClientOptions) (*Client, ClientOptions: options.ClientOptions, } blobClient, _ := blockblob.NewClientWithNoCredential(blobURL, &blobClientOpts) - fileClient := base.NewPathClient(fileURL, blobURL, blobClient, azClient, nil, (*base.ClientOptions)(conOptions)) + fileClient := base.NewPathClient(fileURL, blobURL, blobClient, azClient, nil, nil, (*base.ClientOptions)(conOptions)) return (*Client)(fileClient), nil } @@ -115,7 +119,7 @@ func NewClientWithSharedKeyCredential(fileURL string, cred *SharedKeyCredential, return nil, err } blobClient, _ := blockblob.NewClientWithSharedKeyCredential(blobURL, blobSharedKey, &blobClientOpts) - fileClient := base.NewPathClient(fileURL, blobURL, blobClient, azClient, cred, (*base.ClientOptions)(conOptions)) + fileClient := base.NewPathClient(fileURL, blobURL, blobClient, azClient, cred, nil, (*base.ClientOptions)(conOptions)) return (*Client)(fileClient), nil } @@ -160,6 +164,10 @@ func (f *Client) sharedKey() *exported.SharedKeyCredential { return base.SharedKeyComposite((*base.CompositeClient[generated.PathClient, generated.PathClient, blockblob.Client])(f)) 
} +func (f *Client) identityCredential() *azcore.TokenCredential { + return base.IdentityCredentialComposite((*base.CompositeClient[generated.PathClient, generated.PathClient, blockblob.Client])(f)) +} + func (f *Client) getClientOptions() *base.ClientOptions { return base.GetCompositeClientOptions((*base.CompositeClient[generated.PathClient, generated.PathClient, blockblob.Client])(f)) } @@ -177,50 +185,73 @@ func (f *Client) BlobURL() string { // Create creates a new file (dfs1). func (f *Client) Create(ctx context.Context, options *CreateOptions) (CreateResponse, error) { lac, mac, httpHeaders, createOpts, cpkOpts := options.format() - return f.generatedFileClientWithDFS().Create(ctx, createOpts, httpHeaders, lac, mac, nil, cpkOpts) + resp, err := f.generatedFileClientWithDFS().Create(ctx, createOpts, httpHeaders, lac, mac, nil, cpkOpts) + err = exported.ConvertToDFSError(err) + return resp, err } // Delete deletes a file (dfs1). func (f *Client) Delete(ctx context.Context, options *DeleteOptions) (DeleteResponse, error) { lac, mac, deleteOpts := options.format() - return f.generatedFileClientWithDFS().Delete(ctx, deleteOpts, lac, mac) + resp, err := f.generatedFileClientWithDFS().Delete(ctx, deleteOpts, lac, mac) + err = exported.ConvertToDFSError(err) + return resp, err } // GetProperties gets the properties of a file (blob3) func (f *Client) GetProperties(ctx context.Context, options *GetPropertiesOptions) (GetPropertiesResponse, error) { - opts := options.format() - // TODO: format response + add acls, owner, group, permissions to it - return f.blobClient().GetProperties(ctx, opts) + opts := path.FormatGetPropertiesOptions(options) + var respFromCtx *http.Response + ctxWithResp := runtime.WithCaptureResponse(ctx, &respFromCtx) + resp, err := f.blobClient().GetProperties(ctxWithResp, opts) + newResp := path.FormatGetPropertiesResponse(&resp, respFromCtx) + err = exported.ConvertToDFSError(err) + return newResp, err } -// TODO: implement below -//// Rename renames a file (dfs1). 
TODO: look into returning a new client possibly or changing the url -//func (f *Client) Rename(ctx context.Context, newName string, options *RenameOptions) (RenameResponse, error) { -// path, err := url.Parse(f.DFSURL()) -// if err != nil { -// return RenameResponse{}, err -// } -// lac, mac, smac, createOpts := options.format(path.Path) -// fileURL := runtime.JoinPaths(f.generatedFileClientWithDFS().Endpoint(), newName) -// // TODO: remove new azcore.Client creation after the API for shallow copying with new client name is implemented -// clOpts := f.getClientOptions() -// azClient, err := azcore.NewClient(shared.FileClient, exported.ModuleVersion, *(base.GetPipelineOptions(clOpts)), &(clOpts.ClientOptions)) -// if err != nil { -// if log.Should(exported.EventError) { -// log.Writef(exported.EventError, err.Error()) -// } -// return RenameResponse{}, err -// } -// blobURL, fileURL := shared.GetURLs(fileURL) -// tempFileClient := (*Client)(base.NewPathClient(fileURL, blobURL, nil, azClient, f.sharedKey(), clOpts)) -// // this tempClient does not have a blobClient -// return tempFileClient.generatedFileClientWithDFS().Create(ctx, createOpts, nil, lac, mac, smac, nil) -//} +func (f *Client) renamePathInURL(newName string) (string, string, string) { + endpoint := f.DFSURL() + separator := "/" + // Find the index of the last occurrence of the separator + lastIndex := strings.LastIndex(endpoint, separator) + // Split the string based on the last occurrence of the separator + firstPart := endpoint[:lastIndex] // From the beginning of the string to the last occurrence of the separator + newPathURL, newBlobURL := shared.GetURLs(runtime.JoinPaths(firstPart, newName)) + parsedNewURL, _ := url.Parse(f.DFSURL()) + return parsedNewURL.Path, newPathURL, newBlobURL +} + +// Rename renames a file (dfs1) +func (f *Client) Rename(ctx context.Context, newName string, options *RenameOptions) (RenameResponse, error) { + newPathWithoutURL, newBlobURL, newPathURL := f.renamePathInURL(newName) + lac, mac, smac, createOpts := options.format(newPathWithoutURL) + var newBlobClient *blockblob.Client + var err error + if f.identityCredential() != nil { + newBlobClient, err = blockblob.NewClient(newBlobURL, *f.identityCredential(), nil) + } else if f.sharedKey() != nil { + blobSharedKey, _ := f.sharedKey().ConvertToBlobSharedKey() + newBlobClient, err = blockblob.NewClientWithSharedKeyCredential(newBlobURL, blobSharedKey, nil) + } else { + newBlobClient, err = blockblob.NewClientWithNoCredential(newBlobURL, nil) + } + if err != nil { + return RenameResponse{}, err + } + newFileClient := (*Client)(base.NewPathClient(newPathURL, newBlobURL, newBlobClient, f.generatedFileClientWithDFS().InternalClient().WithClientName(shared.FileClient), f.sharedKey(), f.identityCredential(), f.getClientOptions())) + resp, err := newFileClient.generatedFileClientWithDFS().Create(ctx, createOpts, nil, lac, mac, smac, nil) + return RenameResponse{ + Response: resp, + NewFileClient: newFileClient, + }, exported.ConvertToDFSError(err) +} // SetExpiry operation sets an expiry time on an existing file (blob2). func (f *Client) SetExpiry(ctx context.Context, expiryType SetExpiryType, o *SetExpiryOptions) (SetExpiryResponse, error) { expMode, opts := expiryType.Format(o) - return f.generatedFileClientWithBlob().SetExpiry(ctx, expMode, opts) + resp, err := f.generatedFileClientWithBlob().SetExpiry(ctx, expMode, opts) + err = exported.ConvertToDFSError(err) + return resp, err } //// Upload uploads data to a file. 
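// A minimal sketch (not part of the diff) of calling the new Rename method: per
// renamePathInURL above, newName replaces the final segment of the file's path,
// and the response carries a client bound to the renamed file. The URL and file
// names are illustrative assumptions.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/file"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	fClient, err := file.NewClient("https://<account>.dfs.core.windows.net/myfilesystem/dir/old.txt", cred, nil)
	if err != nil {
		log.Fatal(err)
	}
	resp, err := fClient.Rename(context.TODO(), "new.txt", nil)
	if err != nil {
		log.Fatal(err)
	}
	// NewFileClient targets the renamed path and reuses the original credential.
	fmt.Println("renamed file endpoint:", resp.NewFileClient.DFSURL())
}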
@@ -245,43 +276,54 @@ func (f *Client) SetExpiry(ctx context.Context, expiryType SetExpiryType, o *Set // SetAccessControl sets the owner, owning group, and permissions for a file or directory (dfs1). func (f *Client) SetAccessControl(ctx context.Context, options *SetAccessControlOptions) (SetAccessControlResponse, error) { - opts, lac, mac, err := options.format() + opts, lac, mac, err := path.FormatSetAccessControlOptions(options) if err != nil { return SetAccessControlResponse{}, err } - return f.generatedFileClientWithDFS().SetAccessControl(ctx, opts, lac, mac) + resp, err := f.generatedFileClientWithDFS().SetAccessControl(ctx, opts, lac, mac) + err = exported.ConvertToDFSError(err) + return resp, err } // UpdateAccessControl updates the owner, owning group, and permissions for a file or directory (dfs1). func (f *Client) UpdateAccessControl(ctx context.Context, ACL string, options *UpdateAccessControlOptions) (UpdateAccessControlResponse, error) { opts, mode := options.format(ACL) - return f.generatedFileClientWithDFS().SetAccessControlRecursive(ctx, mode, opts) + resp, err := f.generatedFileClientWithDFS().SetAccessControlRecursive(ctx, mode, opts) + err = exported.ConvertToDFSError(err) + return resp, err } // GetAccessControl gets the owner, owning group, and permissions for a file or directory (dfs1). func (f *Client) GetAccessControl(ctx context.Context, options *GetAccessControlOptions) (GetAccessControlResponse, error) { - opts, lac, mac := options.format() - return f.generatedFileClientWithDFS().GetProperties(ctx, opts, lac, mac) + opts, lac, mac := path.FormatGetAccessControlOptions(options) + resp, err := f.generatedFileClientWithDFS().GetProperties(ctx, opts, lac, mac) + err = exported.ConvertToDFSError(err) + return resp, err } // RemoveAccessControl removes the owner, owning group, and permissions for a file or directory (dfs1). func (f *Client) RemoveAccessControl(ctx context.Context, ACL string, options *RemoveAccessControlOptions) (RemoveAccessControlResponse, error) { opts, mode := options.format(ACL) - return f.generatedFileClientWithDFS().SetAccessControlRecursive(ctx, mode, opts) + resp, err := f.generatedFileClientWithDFS().SetAccessControlRecursive(ctx, mode, opts) + err = exported.ConvertToDFSError(err) + return resp, err } // SetMetadata sets the metadata for a file or directory (blob3). func (f *Client) SetMetadata(ctx context.Context, options *SetMetadataOptions) (SetMetadataResponse, error) { - opts, metadata := options.format() - return f.blobClient().SetMetadata(ctx, metadata, opts) + opts, metadata := path.FormatSetMetadataOptions(options) + resp, err := f.blobClient().SetMetadata(ctx, metadata, opts) + err = exported.ConvertToDFSError(err) + return resp, err } // SetHTTPHeaders sets the HTTP headers for a file or directory (blob3). 
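// A minimal sketch (not part of the diff) of the access-control surface shown
// above. It assumes the file package's SetAccessControlOptions mirrors the
// path-level options (Owner, Group, ACL, Permissions) aliased elsewhere in this
// patch; the ACL string and URL are illustrative only.
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/file"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	fClient, err := file.NewClient("https://<account>.dfs.core.windows.net/myfilesystem/data.txt", cred, nil)
	if err != nil {
		log.Fatal(err)
	}
	acl := "user::rwx,group::r-x,other::---"
	// SetAccessControl goes through the dfs endpoint; errors are converted via
	// ConvertToDFSError, so datalakeerror codes apply here as well.
	if _, err = fClient.SetAccessControl(context.TODO(), &file.SetAccessControlOptions{ACL: &acl}); err != nil {
		log.Fatal(err)
	}
	if _, err = fClient.GetAccessControl(context.TODO(), nil); err != nil {
		log.Fatal(err)
	}
}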
func (f *Client) SetHTTPHeaders(ctx context.Context, httpHeaders HTTPHeaders, options *SetHTTPHeadersOptions) (SetHTTPHeadersResponse, error) { - opts, blobHTTPHeaders := options.format(httpHeaders) + opts, blobHTTPHeaders := path.FormatSetHTTPHeadersOptions(options, httpHeaders) resp, err := f.blobClient().SetHTTPHeaders(ctx, blobHTTPHeaders, opts) newResp := SetHTTPHeadersResponse{} - formatSetHTTPHeadersResponse(&newResp, &resp) + path.FormatSetHTTPHeadersResponse(&newResp, &resp) + err = exported.ConvertToDFSError(err) return newResp, err } @@ -293,11 +335,12 @@ func (f *Client) GetSASURL(permissions sas.FilePermissions, expiry time.Time, o } urlParts, err := sas.ParseURL(f.BlobURL()) + err = exported.ConvertToDFSError(err) if err != nil { return "", err } - st := o.format() + st := path.FormatGetSASURLOptions(o) qps, err := sas.DatalakeSignatureValues{ FilePath: urlParts.PathName, @@ -308,6 +351,7 @@ func (f *Client) GetSASURL(permissions sas.FilePermissions, expiry time.Time, o ExpiryTime: expiry.UTC(), }.SignWithSharedKey(f.sharedKey()) + err = exported.ConvertToDFSError(err) if err != nil { return "", err } diff --git a/sdk/storage/azdatalake/file/client_test.go b/sdk/storage/azdatalake/file/client_test.go index cd786f2a08e3..f9b2ccdacf71 100644 --- a/sdk/storage/azdatalake/file/client_test.go +++ b/sdk/storage/azdatalake/file/client_test.go @@ -64,7 +64,7 @@ func validateFileDeleted(_require *require.Assertions, fileClient *file.Client) _, err := fileClient.GetAccessControl(context.Background(), nil) _require.NotNil(err) - testcommon.ValidateErrorCode(_require, err, datalakeerror.BlobNotFound) + testcommon.ValidateErrorCode(_require, err, datalakeerror.PathNotFound) } func (s *RecordedTestSuite) TestCreateFileAndDelete() { @@ -518,7 +518,7 @@ func (s *RecordedTestSuite) TestCreateFileWithExpiryRelativeToNow() { time.Sleep(time.Second * 10) _, err = fClient.GetProperties(context.Background(), nil) - testcommon.ValidateErrorCode(_require, err, datalakeerror.BlobNotFound) + testcommon.ValidateErrorCode(_require, err, datalakeerror.PathNotFound) } func (s *RecordedTestSuite) TestCreateFileWithNeverExpire() { @@ -593,10 +593,12 @@ func (s *RecordedTestSuite) TestCreateFileWithPermissions() { fsClient, err := testcommon.GetFilesystemClient(filesystemName, s.T(), testcommon.TestAccountDatalake, nil) _require.NoError(err) perms := "0777" + umask := "0000" defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) createFileOpts := &file.CreateOptions{ Permissions: &perms, + Umask: &umask, } _, err = fsClient.Create(context.Background(), nil) @@ -610,7 +612,10 @@ func (s *RecordedTestSuite) TestCreateFileWithPermissions() { _require.Nil(err) _require.NotNil(resp) - //TODO: GetProperties() when you figured out how to add permissions into response + resp2, err := fClient.GetProperties(context.Background(), nil) + _require.Nil(err) + _require.NotNil(resp2) + _require.Equal("rwxrwxrwx", *resp2.Permissions) } func (s *RecordedTestSuite) TestCreateFileWithOwnerGroupACLUmask() { @@ -644,7 +649,6 @@ func (s *RecordedTestSuite) TestCreateFileWithOwnerGroupACLUmask() { _require.Nil(err) _require.NotNil(resp) - //TODO: GetProperties() when you figured out how to add o,g, ACL into response } func (s *RecordedTestSuite) TestDeleteFileWithNilAccessConditions() { @@ -2033,280 +2037,268 @@ func (s *RecordedTestSuite) TestSetHTTPHeadersIfETagMatchFalse() { testcommon.ValidateErrorCode(_require, err, datalakeerror.ConditionNotMet) } -//func (s *RecordedTestSuite) TestRenameNoOptions() { -// 
_require := require.New(s.T()) -// testName := s.T().Name() -// -// filesystemName := testcommon.GenerateFilesystemName(testName) -// fsClient, err := testcommon.GetFilesystemClient(filesystemName, s.T(), testcommon.TestAccountDatalake, nil) -// _require.NoError(err) -// defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) -// -// _, err = fsClient.Create(context.Background(), nil) -// _require.Nil(err) -// -// fileName := testcommon.GenerateFileName(testName) -// fClient, err := testcommon.GetFileClient(filesystemName, fileName, s.T(), testcommon.TestAccountDatalake, nil) -// _require.NoError(err) -// -// resp, err := fClient.Create(context.Background(), nil) -// _require.Nil(err) -// _require.NotNil(resp) -// -// resp, err = fClient.Rename(context.Background(), "newName", nil) -// _require.Nil(err) -// _require.NotNil(resp) -//} -// -//func (s *RecordedTestSuite) TestRenameFileWithNilAccessConditions() { -// _require := require.New(s.T()) -// testName := s.T().Name() -// -// filesystemName := testcommon.GenerateFilesystemName(testName) -// fsClient, err := testcommon.GetFilesystemClient(filesystemName, s.T(), testcommon.TestAccountDatalake, nil) -// _require.NoError(err) -// defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) -// -// _, err = fsClient.Create(context.Background(), nil) -// _require.Nil(err) -// -// fileName := testcommon.GenerateFileName(testName) -// fClient, err := testcommon.GetFileClient(filesystemName, fileName, s.T(), testcommon.TestAccountDatalake, nil) -// _require.NoError(err) -// -// resp, err := fClient.Create(context.Background(), nil) -// _require.Nil(err) -// _require.NotNil(resp) -// -// renameFileOpts := &file.RenameOptions{ -// AccessConditions: nil, -// } -// -// resp, err = fClient.Rename(context.Background(), "new"+fileName, renameFileOpts) -// _require.Nil(err) -// _require.NotNil(resp) -//} -// -//func (s *RecordedTestSuite) TestRenameFileIfModifiedSinceTrue() { -// _require := require.New(s.T()) -// testName := s.T().Name() -// -// filesystemName := testcommon.GenerateFilesystemName(testName) -// fsClient, err := testcommon.GetFilesystemClient(filesystemName, s.T(), testcommon.TestAccountDatalake, nil) -// _require.NoError(err) -// defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) -// -// _, err = fsClient.Create(context.Background(), nil) -// _require.Nil(err) -// -// fileName := testcommon.GenerateFileName(testName) -// fClient, err := testcommon.GetFileClient(filesystemName, fileName, s.T(), testcommon.TestAccountDatalake, nil) -// _require.NoError(err) -// -// defer testcommon.DeleteFile(context.Background(), _require, fClient) -// -// resp, err := fClient.Create(context.Background(), nil) -// _require.Nil(err) -// _require.NotNil(resp) -// -// currentTime := testcommon.GetRelativeTimeFromAnchor(resp.Date, -10) -// -// createFileOpts := &file.CreateOptions{ -// AccessConditions: &file.AccessConditions{ -// ModifiedAccessConditions: &file.ModifiedAccessConditions{ -// IfModifiedSince: ¤tTime, -// }, -// }, -// } -// -// resp, err = fClient.Create(context.Background(), createFileOpts) -// _require.Nil(err) -// _require.NotNil(resp) -//} -// -//func (s *RecordedTestSuite) TestRenameFileIfModifiedSinceFalse() { -// _require := require.New(s.T()) -// testName := s.T().Name() -// -// filesystemName := testcommon.GenerateFilesystemName(testName) -// fsClient, err := testcommon.GetFilesystemClient(filesystemName, s.T(), testcommon.TestAccountDatalake, nil) -// _require.NoError(err) 
-// defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) -// -// _, err = fsClient.Create(context.Background(), nil) -// _require.Nil(err) -// -// fileName := testcommon.GenerateFileName(testName) -// fClient, err := testcommon.GetFileClient(filesystemName, fileName, s.T(), testcommon.TestAccountDatalake, nil) -// _require.NoError(err) -// -// defer testcommon.DeleteFile(context.Background(), _require, fClient) -// -// resp, err := fClient.Create(context.Background(), nil) -// _require.Nil(err) -// _require.NotNil(resp) -// -// currentTime := testcommon.GetRelativeTimeFromAnchor(resp.Date, 10) -// -// createFileOpts := &file.CreateOptions{ -// AccessConditions: &file.AccessConditions{ -// ModifiedAccessConditions: &file.ModifiedAccessConditions{ -// IfModifiedSince: ¤tTime, -// }, -// }, -// } -// -// resp, err = fClient.Create(context.Background(), createFileOpts) -// _require.NotNil(err) -// testcommon.ValidateErrorCode(_require, err, datalakeerror.ConditionNotMet) -//} -// -//func (s *RecordedTestSuite) TestRenameFileIfUnmodifiedSinceTrue() { -// _require := require.New(s.T()) -// testName := s.T().Name() -// -// filesystemName := testcommon.GenerateFilesystemName(testName) -// fsClient, err := testcommon.GetFilesystemClient(filesystemName, s.T(), testcommon.TestAccountDatalake, nil) -// _require.NoError(err) -// defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) -// -// _, err = fsClient.Create(context.Background(), nil) -// _require.Nil(err) -// -// fileName := testcommon.GenerateFileName(testName) -// fClient, err := testcommon.GetFileClient(filesystemName, fileName, s.T(), testcommon.TestAccountDatalake, nil) -// _require.NoError(err) -// -// defer testcommon.DeleteFile(context.Background(), _require, fClient) -// -// resp, err := fClient.Create(context.Background(), nil) -// _require.Nil(err) -// _require.NotNil(resp) -// -// currentTime := testcommon.GetRelativeTimeFromAnchor(resp.Date, 10) -// -// createFileOpts := &file.CreateOptions{ -// AccessConditions: &file.AccessConditions{ -// ModifiedAccessConditions: &file.ModifiedAccessConditions{ -// IfUnmodifiedSince: ¤tTime, -// }, -// }, -// } -// -// resp, err = fClient.Create(context.Background(), createFileOpts) -// _require.Nil(err) -// _require.NotNil(resp) -//} -// -//func (s *RecordedTestSuite) TestRenameFileIfUnmodifiedSinceFalse() { -// _require := require.New(s.T()) -// testName := s.T().Name() -// -// filesystemName := testcommon.GenerateFilesystemName(testName) -// fsClient, err := testcommon.GetFilesystemClient(filesystemName, s.T(), testcommon.TestAccountDatalake, nil) -// _require.NoError(err) -// defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) -// -// _, err = fsClient.Create(context.Background(), nil) -// _require.Nil(err) -// -// fileName := testcommon.GenerateFileName(testName) -// fClient, err := testcommon.GetFileClient(filesystemName, fileName, s.T(), testcommon.TestAccountDatalake, nil) -// _require.NoError(err) -// -// defer testcommon.DeleteFile(context.Background(), _require, fClient) -// -// resp, err := fClient.Create(context.Background(), nil) -// _require.Nil(err) -// _require.NotNil(resp) -// -// currentTime := testcommon.GetRelativeTimeFromAnchor(resp.Date, -10) -// -// createFileOpts := &file.CreateOptions{ -// AccessConditions: &file.AccessConditions{ -// ModifiedAccessConditions: &file.ModifiedAccessConditions{ -// IfUnmodifiedSince: ¤tTime, -// }, -// }, -// } -// -// resp, err = fClient.Create(context.Background(), 
createFileOpts) -// _require.NotNil(err) -// -// testcommon.ValidateErrorCode(_require, err, datalakeerror.ConditionNotMet) -//} -// -//func (s *RecordedTestSuite) TestRenameFileIfETagMatch() { -// _require := require.New(s.T()) -// testName := s.T().Name() -// -// filesystemName := testcommon.GenerateFilesystemName(testName) -// fsClient, err := testcommon.GetFilesystemClient(filesystemName, s.T(), testcommon.TestAccountDatalake, nil) -// _require.NoError(err) -// defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) -// -// _, err = fsClient.Create(context.Background(), nil) -// _require.Nil(err) -// -// fileName := testcommon.GenerateFileName(testName) -// fClient, err := testcommon.GetFileClient(filesystemName, fileName, s.T(), testcommon.TestAccountDatalake, nil) -// _require.NoError(err) -// -// defer testcommon.DeleteFile(context.Background(), _require, fClient) -// -// resp, err := fClient.Create(context.Background(), nil) -// _require.Nil(err) -// _require.NotNil(resp) -// -// etag := resp.ETag -// -// createFileOpts := &file.CreateOptions{ -// AccessConditions: &file.AccessConditions{ -// ModifiedAccessConditions: &file.ModifiedAccessConditions{ -// IfMatch: etag, -// }, -// }, -// } -// -// resp, err = fClient.Create(context.Background(), createFileOpts) -// _require.Nil(err) -// _require.NotNil(resp) -//} -// -//func (s *RecordedTestSuite) TestRenameFileIfETagMatchFalse() { -// _require := require.New(s.T()) -// testName := s.T().Name() -// -// filesystemName := testcommon.GenerateFilesystemName(testName) -// fsClient, err := testcommon.GetFilesystemClient(filesystemName, s.T(), testcommon.TestAccountDatalake, nil) -// _require.NoError(err) -// defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) -// -// _, err = fsClient.Create(context.Background(), nil) -// _require.Nil(err) -// -// fileName := testcommon.GenerateFileName(testName) -// fClient, err := testcommon.GetFileClient(filesystemName, fileName, s.T(), testcommon.TestAccountDatalake, nil) -// _require.NoError(err) -// -// defer testcommon.DeleteFile(context.Background(), _require, fClient) -// -// resp, err := fClient.Create(context.Background(), nil) -// _require.Nil(err) -// _require.NotNil(resp) -// -// etag := resp.ETag -// -// createFileOpts := &file.CreateOptions{ -// AccessConditions: &file.AccessConditions{ -// ModifiedAccessConditions: &file.ModifiedAccessConditions{ -// IfNoneMatch: etag, -// }, -// }, -// } -// -// resp, err = fClient.Create(context.Background(), createFileOpts) -// _require.NotNil(err) -// -// testcommon.ValidateErrorCode(_require, err, datalakeerror.ConditionNotMet) -//} +func (s *RecordedTestSuite) TestRenameNoOptions() { + _require := require.New(s.T()) + testName := s.T().Name() + + filesystemName := testcommon.GenerateFilesystemName(testName) + fsClient, err := testcommon.GetFilesystemClient(filesystemName, s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) + + _, err = fsClient.Create(context.Background(), nil) + _require.Nil(err) + + fileName := testcommon.GenerateFileName(testName) + fClient, err := testcommon.GetFileClient(filesystemName, fileName, s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + + resp, err := fClient.Create(context.Background(), nil) + _require.Nil(err) + _require.NotNil(resp) + + resp1, err := fClient.Rename(context.Background(), "newName", nil) + _require.Nil(err) + _require.NotNil(resp1) + 
_require.Contains(resp1.NewFileClient.DFSURL(), "newName") +} + +func (s *RecordedTestSuite) TestRenameFileWithNilAccessConditions() { + _require := require.New(s.T()) + testName := s.T().Name() + + filesystemName := testcommon.GenerateFilesystemName(testName) + fsClient, err := testcommon.GetFilesystemClient(filesystemName, s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) + + _, err = fsClient.Create(context.Background(), nil) + _require.Nil(err) + + fileName := testcommon.GenerateFileName(testName) + fClient, err := testcommon.GetFileClient(filesystemName, fileName, s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + + resp, err := fClient.Create(context.Background(), nil) + _require.Nil(err) + _require.NotNil(resp) + + renameFileOpts := &file.RenameOptions{ + AccessConditions: nil, + } + + resp1, err := fClient.Rename(context.Background(), "newName", renameFileOpts) + _require.Nil(err) + _require.NotNil(resp1) + _require.Contains(resp1.NewFileClient.DFSURL(), "newName") +} + +func (s *RecordedTestSuite) TestRenameFileIfModifiedSinceTrue() { + _require := require.New(s.T()) + testName := s.T().Name() + + filesystemName := testcommon.GenerateFilesystemName(testName) + fsClient, err := testcommon.GetFilesystemClient(filesystemName, s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) + + _, err = fsClient.Create(context.Background(), nil) + _require.Nil(err) + + fileName := testcommon.GenerateFileName(testName) + fClient, err := testcommon.GetFileClient(filesystemName, fileName, s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + + resp, err := fClient.Create(context.Background(), nil) + _require.Nil(err) + _require.NotNil(resp) + + currentTime := testcommon.GetRelativeTimeFromAnchor(resp.Date, -10) + + renameFileOpts := &file.RenameOptions{ + SourceAccessConditions: &file.SourceAccessConditions{ + SourceModifiedAccessConditions: &file.SourceModifiedAccessConditions{ + SourceIfModifiedSince: ¤tTime, + }, + }, + } + resp1, err := fClient.Rename(context.Background(), "newName", renameFileOpts) + _require.Nil(err) + _require.NotNil(resp1) + _require.Contains(resp1.NewFileClient.DFSURL(), "newName") +} + +func (s *RecordedTestSuite) TestRenameFileIfModifiedSinceFalse() { + _require := require.New(s.T()) + testName := s.T().Name() + + filesystemName := testcommon.GenerateFilesystemName(testName) + fsClient, err := testcommon.GetFilesystemClient(filesystemName, s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) + + _, err = fsClient.Create(context.Background(), nil) + _require.Nil(err) + + fileName := testcommon.GenerateFileName(testName) + fClient, err := testcommon.GetFileClient(filesystemName, fileName, s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + + resp, err := fClient.Create(context.Background(), nil) + _require.Nil(err) + _require.NotNil(resp) + + currentTime := testcommon.GetRelativeTimeFromAnchor(resp.Date, 10) + + renameFileOpts := &file.RenameOptions{ + SourceAccessConditions: &file.SourceAccessConditions{ + SourceModifiedAccessConditions: &file.SourceModifiedAccessConditions{ + SourceIfModifiedSince: ¤tTime, + }, + }, + } + + _, err = fClient.Rename(context.Background(), "newName", renameFileOpts) + _require.NotNil(err) + 
testcommon.ValidateErrorCode(_require, err, datalakeerror.SourceConditionNotMet) +} + +func (s *RecordedTestSuite) TestRenameFileIfUnmodifiedSinceTrue() { + _require := require.New(s.T()) + testName := s.T().Name() + + filesystemName := testcommon.GenerateFilesystemName(testName) + fsClient, err := testcommon.GetFilesystemClient(filesystemName, s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) + + _, err = fsClient.Create(context.Background(), nil) + _require.Nil(err) + + fileName := testcommon.GenerateFileName(testName) + fClient, err := testcommon.GetFileClient(filesystemName, fileName, s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + + resp, err := fClient.Create(context.Background(), nil) + _require.Nil(err) + _require.NotNil(resp) + + currentTime := testcommon.GetRelativeTimeFromAnchor(resp.Date, 10) + + renameFileOpts := &file.RenameOptions{ + SourceAccessConditions: &file.SourceAccessConditions{ + SourceModifiedAccessConditions: &file.SourceModifiedAccessConditions{ + SourceIfUnmodifiedSince: ¤tTime, + }, + }, + } + + resp1, err := fClient.Rename(context.Background(), "newName", renameFileOpts) + _require.NotNil(resp1) + _require.Contains(resp1.NewFileClient.DFSURL(), "newName") +} + +func (s *RecordedTestSuite) TestRenameFileIfUnmodifiedSinceFalse() { + _require := require.New(s.T()) + testName := s.T().Name() + + filesystemName := testcommon.GenerateFilesystemName(testName) + fsClient, err := testcommon.GetFilesystemClient(filesystemName, s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) + + _, err = fsClient.Create(context.Background(), nil) + _require.Nil(err) + + fileName := testcommon.GenerateFileName(testName) + fClient, err := testcommon.GetFileClient(filesystemName, fileName, s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + + resp, err := fClient.Create(context.Background(), nil) + _require.Nil(err) + _require.NotNil(resp) + + currentTime := testcommon.GetRelativeTimeFromAnchor(resp.Date, -10) + + renameFileOpts := &file.RenameOptions{ + SourceAccessConditions: &file.SourceAccessConditions{ + SourceModifiedAccessConditions: &file.SourceModifiedAccessConditions{ + SourceIfUnmodifiedSince: ¤tTime, + }, + }, + } + + _, err = fClient.Rename(context.Background(), "newName", renameFileOpts) + _require.NotNil(err) + testcommon.ValidateErrorCode(_require, err, datalakeerror.SourceConditionNotMet) +} + +func (s *RecordedTestSuite) TestRenameFileIfETagMatch() { + _require := require.New(s.T()) + testName := s.T().Name() + + filesystemName := testcommon.GenerateFilesystemName(testName) + fsClient, err := testcommon.GetFilesystemClient(filesystemName, s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) + + _, err = fsClient.Create(context.Background(), nil) + _require.Nil(err) + + fileName := testcommon.GenerateFileName(testName) + fClient, err := testcommon.GetFileClient(filesystemName, fileName, s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + + resp, err := fClient.Create(context.Background(), nil) + _require.Nil(err) + _require.NotNil(resp) + + etag := resp.ETag + + renameFileOpts := &file.RenameOptions{ + SourceAccessConditions: &file.SourceAccessConditions{ + SourceModifiedAccessConditions: &file.SourceModifiedAccessConditions{ + 
SourceIfMatch: etag, + }, + }, + } + + resp1, err := fClient.Rename(context.Background(), "newName", renameFileOpts) + _require.NotNil(resp1) + _require.Contains(resp1.NewFileClient.DFSURL(), "newName") +} + +func (s *RecordedTestSuite) TestRenameFileIfETagMatchFalse() { + _require := require.New(s.T()) + testName := s.T().Name() + + filesystemName := testcommon.GenerateFilesystemName(testName) + fsClient, err := testcommon.GetFilesystemClient(filesystemName, s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) + + _, err = fsClient.Create(context.Background(), nil) + _require.Nil(err) + + fileName := testcommon.GenerateFileName(testName) + fClient, err := testcommon.GetFileClient(filesystemName, fileName, s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + + resp, err := fClient.Create(context.Background(), nil) + _require.Nil(err) + _require.NotNil(resp) + + etag := resp.ETag + + renameFileOpts := &file.RenameOptions{ + SourceAccessConditions: &file.SourceAccessConditions{ + SourceModifiedAccessConditions: &file.SourceModifiedAccessConditions{ + SourceIfNoneMatch: etag, + }, + }, + } + + _, err = fClient.Rename(context.Background(), "newName", renameFileOpts) + _require.NotNil(err) + testcommon.ValidateErrorCode(_require, err, datalakeerror.SourceConditionNotMet) +} diff --git a/sdk/storage/azdatalake/file/constants.go b/sdk/storage/azdatalake/file/constants.go index c536d01673cf..2345c88d547b 100644 --- a/sdk/storage/azdatalake/file/constants.go +++ b/sdk/storage/azdatalake/file/constants.go @@ -7,56 +7,32 @@ package file import ( - "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/lease" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/path" ) -type EncryptionAlgorithmType = blob.EncryptionAlgorithmType +type EncryptionAlgorithmType = path.EncryptionAlgorithmType const ( - EncryptionAlgorithmTypeNone EncryptionAlgorithmType = blob.EncryptionAlgorithmTypeNone - EncryptionAlgorithmTypeAES256 EncryptionAlgorithmType = blob.EncryptionAlgorithmTypeAES256 + EncryptionAlgorithmTypeNone EncryptionAlgorithmType = path.EncryptionAlgorithmTypeNone + EncryptionAlgorithmTypeAES256 EncryptionAlgorithmType = path.EncryptionAlgorithmTypeAES256 ) -// responses models: +// response models: -type ImmutabilityPolicyMode = blob.ImmutabilityPolicyMode +type ImmutabilityPolicyMode = path.ImmutabilityPolicyMode const ( - ImmutabilityPolicyModeMutable ImmutabilityPolicyMode = blob.ImmutabilityPolicyModeMutable - ImmutabilityPolicyModeUnlocked ImmutabilityPolicyMode = blob.ImmutabilityPolicyModeUnlocked - ImmutabilityPolicyModeLocked ImmutabilityPolicyMode = blob.ImmutabilityPolicyModeLocked + ImmutabilityPolicyModeMutable ImmutabilityPolicyMode = path.ImmutabilityPolicyModeMutable + ImmutabilityPolicyModeUnlocked ImmutabilityPolicyMode = path.ImmutabilityPolicyModeUnlocked + ImmutabilityPolicyModeLocked ImmutabilityPolicyMode = path.ImmutabilityPolicyModeLocked ) -type CopyStatusType = blob.CopyStatusType +// CopyStatusType defines values for CopyStatusType +type CopyStatusType = path.CopyStatusType const ( - CopyStatusTypePending CopyStatusType = blob.CopyStatusTypePending - CopyStatusTypeSuccess CopyStatusType = blob.CopyStatusTypeSuccess - CopyStatusTypeAborted CopyStatusType = blob.CopyStatusTypeAborted - CopyStatusTypeFailed CopyStatusType = blob.CopyStatusTypeFailed -) - -type LeaseDurationType = 
lease.DurationType - -const ( - LeaseDurationTypeInfinite LeaseDurationType = lease.DurationTypeInfinite - LeaseDurationTypeFixed LeaseDurationType = lease.DurationTypeFixed -) - -type LeaseStateType = lease.StateType - -const ( - LeaseStateTypeAvailable LeaseStateType = lease.StateTypeAvailable - LeaseStateTypeLeased LeaseStateType = lease.StateTypeLeased - LeaseStateTypeExpired LeaseStateType = lease.StateTypeExpired - LeaseStateTypeBreaking LeaseStateType = lease.StateTypeBreaking - LeaseStateTypeBroken LeaseStateType = lease.StateTypeBroken -) - -type LeaseStatusType = lease.StatusType - -const ( - LeaseStatusTypeLocked LeaseStatusType = lease.StatusTypeLocked - LeaseStatusTypeUnlocked LeaseStatusType = lease.StatusTypeUnlocked + CopyStatusTypePending CopyStatusType = path.CopyStatusTypePending + CopyStatusTypeSuccess CopyStatusType = path.CopyStatusTypeSuccess + CopyStatusTypeAborted CopyStatusType = path.CopyStatusTypeAborted + CopyStatusTypeFailed CopyStatusType = path.CopyStatusTypeFailed ) diff --git a/sdk/storage/azdatalake/file/models.go b/sdk/storage/azdatalake/file/models.go index af88f45bd326..a4f8b994ff1d 100644 --- a/sdk/storage/azdatalake/file/models.go +++ b/sdk/storage/azdatalake/file/models.go @@ -7,12 +7,10 @@ package file import ( - "errors" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/datalakeerror" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/path" "net/http" "strconv" "time" @@ -77,7 +75,7 @@ func (o *CreateOptions) format() (*generated.LeaseAccessConditions, *generated.M var cpkOpts *generated.CPKInfo if o.HTTPHeaders != nil { - httpHeaders = o.HTTPHeaders.formatPathHTTPHeaders() + httpHeaders = path.FormatPathHTTPHeaders(o.HTTPHeaders) } if o.CPKInfo != nil { cpkOpts = &generated.CPKInfo{ @@ -127,6 +125,9 @@ func (o *RenameOptions) format(path string) (*generated.LeaseAccessConditions, * } leaseAccessConditions, modifiedAccessConditions := exported.FormatPathAccessConditions(o.AccessConditions) if o.SourceAccessConditions != nil { + if o.SourceAccessConditions.SourceLeaseAccessConditions != nil { + createOpts.SourceLeaseID = o.SourceAccessConditions.SourceLeaseAccessConditions.LeaseID + } if o.SourceAccessConditions.SourceModifiedAccessConditions != nil { sourceModifiedAccessConditions := &generated.SourceModifiedAccessConditions{ SourceIfMatch: o.SourceAccessConditions.SourceModifiedAccessConditions.SourceIfMatch, @@ -134,90 +135,14 @@ func (o *RenameOptions) format(path string) (*generated.LeaseAccessConditions, * SourceIfNoneMatch: o.SourceAccessConditions.SourceModifiedAccessConditions.SourceIfNoneMatch, SourceIfUnmodifiedSince: o.SourceAccessConditions.SourceModifiedAccessConditions.SourceIfUnmodifiedSince, } - createOpts.SourceLeaseID = o.SourceAccessConditions.SourceLeaseAccessConditions.LeaseID return leaseAccessConditions, modifiedAccessConditions, sourceModifiedAccessConditions, createOpts } } return leaseAccessConditions, modifiedAccessConditions, nil, createOpts } -// GetPropertiesOptions contains the optional parameters for the Client.GetProperties method -type GetPropertiesOptions struct { - AccessConditions *AccessConditions - CPKInfo *CPKInfo -} - -func (o *GetPropertiesOptions) format() *blob.GetPropertiesOptions { - if o == nil { - return nil - 
} - accessConditions := exported.FormatBlobAccessConditions(o.AccessConditions) - return &blob.GetPropertiesOptions{ - AccessConditions: accessConditions, - CPKInfo: &blob.CPKInfo{ - EncryptionKey: o.CPKInfo.EncryptionKey, - EncryptionAlgorithm: o.CPKInfo.EncryptionAlgorithm, - EncryptionKeySHA256: o.CPKInfo.EncryptionKeySHA256, - }, - } -} - // ===================================== PATH IMPORTS =========================================== -// SetAccessControlOptions contains the optional parameters when calling the SetAccessControl operation. dfs endpoint -type SetAccessControlOptions struct { - // Owner is the owner of the path. - Owner *string - // Group is the owning group of the path. - Group *string - // ACL is the access control list for the path. - ACL *string - // Permissions is the octal representation of the permissions for user, group and mask. - Permissions *string - // AccessConditions contains parameters for accessing the path. - AccessConditions *AccessConditions -} - -func (o *SetAccessControlOptions) format() (*generated.PathClientSetAccessControlOptions, *generated.LeaseAccessConditions, *generated.ModifiedAccessConditions, error) { - if o == nil { - return nil, nil, nil, datalakeerror.MissingParameters - } - // call path formatter since we're hitting dfs in this operation - leaseAccessConditions, modifiedAccessConditions := exported.FormatPathAccessConditions(o.AccessConditions) - if o.Owner == nil && o.Group == nil && o.ACL == nil && o.Permissions == nil { - return nil, nil, nil, errors.New("at least one parameter should be set for SetAccessControl API") - } - return &generated.PathClientSetAccessControlOptions{ - Owner: o.Owner, - Group: o.Group, - ACL: o.ACL, - Permissions: o.Permissions, - }, leaseAccessConditions, modifiedAccessConditions, nil -} - -// GetAccessControlOptions contains the optional parameters when calling the GetAccessControl operation. -type GetAccessControlOptions struct { - // UPN is the user principal name. - UPN *bool - // AccessConditions contains parameters for accessing the path. - AccessConditions *AccessConditions -} - -func (o *GetAccessControlOptions) format() (*generated.PathClientGetPropertiesOptions, *generated.LeaseAccessConditions, *generated.ModifiedAccessConditions) { - action := generated.PathGetPropertiesActionGetAccessControl - if o == nil { - return &generated.PathClientGetPropertiesOptions{ - Action: &action, - }, nil, nil - } - // call path formatter since we're hitting dfs in this operation - leaseAccessConditions, modifiedAccessConditions := exported.FormatPathAccessConditions(o.AccessConditions) - return &generated.PathClientGetPropertiesOptions{ - Upn: o.UPN, - Action: &action, - }, leaseAccessConditions, modifiedAccessConditions -} - // UpdateAccessControlOptions contains the optional parameters when calling the UpdateAccessControlRecursive operation. type UpdateAccessControlOptions struct { //placeholder @@ -242,133 +167,6 @@ func (o *RemoveAccessControlOptions) format(ACL string) (*generated.PathClientSe }, mode } -// SetHTTPHeadersOptions contains the optional parameters for the Client.SetHTTPHeaders method. 
-type SetHTTPHeadersOptions struct { - AccessConditions *AccessConditions -} - -func (o *SetHTTPHeadersOptions) format(httpHeaders HTTPHeaders) (*blob.SetHTTPHeadersOptions, blob.HTTPHeaders) { - httpHeaderOpts := blob.HTTPHeaders{ - BlobCacheControl: httpHeaders.CacheControl, - BlobContentDisposition: httpHeaders.ContentDisposition, - BlobContentEncoding: httpHeaders.ContentEncoding, - BlobContentLanguage: httpHeaders.ContentLanguage, - BlobContentMD5: httpHeaders.ContentMD5, - BlobContentType: httpHeaders.ContentType, - } - if o == nil { - return nil, httpHeaderOpts - } - accessConditions := exported.FormatBlobAccessConditions(o.AccessConditions) - return &blob.SetHTTPHeadersOptions{ - AccessConditions: accessConditions, - }, httpHeaderOpts -} - -// HTTPHeaders contains the HTTP headers for path operations. -type HTTPHeaders struct { - // Optional. Sets the path's cache control. If specified, this property is stored with the path and returned with a read request. - CacheControl *string - // Optional. Sets the path's Content-Disposition header. - ContentDisposition *string - // Optional. Sets the path's content encoding. If specified, this property is stored with the blobpath and returned with a read - // request. - ContentEncoding *string - // Optional. Set the path's content language. If specified, this property is stored with the path and returned with a read - // request. - ContentLanguage *string - // Specify the transactional md5 for the body, to be validated by the service. - ContentMD5 []byte - // Optional. Sets the path's content type. If specified, this property is stored with the path and returned with a read request. - ContentType *string -} - -// -//func (o HTTPHeaders) formatBlobHTTPHeaders() blob.HTTPHeaders { -// -// opts := blob.HTTPHeaders{ -// BlobCacheControl: o.CacheControl, -// BlobContentDisposition: o.ContentDisposition, -// BlobContentEncoding: o.ContentEncoding, -// BlobContentLanguage: o.ContentLanguage, -// BlobContentMD5: o.ContentMD5, -// BlobContentType: o.ContentType, -// } -// return opts -//} - -func (o *HTTPHeaders) formatPathHTTPHeaders() *generated.PathHTTPHeaders { - // TODO: will be used for file related ops, like append - if o == nil { - return nil - } - opts := generated.PathHTTPHeaders{ - CacheControl: o.CacheControl, - ContentDisposition: o.ContentDisposition, - ContentEncoding: o.ContentEncoding, - ContentLanguage: o.ContentLanguage, - ContentMD5: o.ContentMD5, - ContentType: o.ContentType, - TransactionalContentHash: o.ContentMD5, - } - return &opts -} - -// SetMetadataOptions provides set of configurations for Set Metadata on path operation -type SetMetadataOptions struct { - Metadata map[string]*string - AccessConditions *AccessConditions - CPKInfo *CPKInfo - CPKScopeInfo *CPKScopeInfo -} - -func (o *SetMetadataOptions) format() (*blob.SetMetadataOptions, map[string]*string) { - if o == nil { - return nil, nil - } - accessConditions := exported.FormatBlobAccessConditions(o.AccessConditions) - opts := &blob.SetMetadataOptions{ - AccessConditions: accessConditions, - } - if o.CPKInfo != nil { - opts.CPKInfo = &blob.CPKInfo{ - EncryptionKey: o.CPKInfo.EncryptionKey, - EncryptionAlgorithm: o.CPKInfo.EncryptionAlgorithm, - EncryptionKeySHA256: o.CPKInfo.EncryptionKeySHA256, - } - } - if o.CPKScopeInfo != nil { - opts.CPKScopeInfo = (*blob.CPKScopeInfo)(o.CPKScopeInfo) - } - return opts, o.Metadata -} - -// CPKInfo contains a group of parameters for the PathClient.Download method. 
-type CPKInfo struct { - EncryptionAlgorithm *EncryptionAlgorithmType - EncryptionKey *string - EncryptionKeySHA256 *string -} - -// GetSASURLOptions contains the optional parameters for the Client.GetSASURL method. -type GetSASURLOptions struct { - StartTime *time.Time -} - -func (o *GetSASURLOptions) format() time.Time { - if o == nil { - return time.Time{} - } - - var st time.Time - if o.StartTime != nil { - st = o.StartTime.UTC() - } else { - st = time.Time{} - } - return st -} - // CreationExpiryType defines values for Create() ExpiryType type CreationExpiryType interface { Format() (generated.ExpiryOptions, *string) @@ -411,12 +209,6 @@ type ACLFailedEntry = generated.ACLFailedEntry // SetAccessControlRecursiveResponse contains part of the response data returned by the []OP_AccessControl operations. type SetAccessControlRecursiveResponse = generated.SetAccessControlRecursiveResponse -// CPKScopeInfo contains a group of parameters for the PathClient.SetMetadata method. -type CPKScopeInfo blob.CPKScopeInfo - -// SharedKeyCredential contains an account's name and its primary or secondary key. -type SharedKeyCredential = exported.SharedKeyCredential - // SetExpiryType defines values for ExpiryType. type SetExpiryType = exported.SetExpiryType @@ -435,17 +227,49 @@ type SetExpiryTypeNever = exported.SetExpiryTypeNever // SetExpiryOptions contains the optional parameters for the Client.SetExpiry method. type SetExpiryOptions = exported.SetExpiryOptions +// ================================= path imports ================================== + +// GetPropertiesOptions contains the optional parameters for the Client.GetProperties method +type GetPropertiesOptions = path.GetPropertiesOptions + +// SetAccessControlOptions contains the optional parameters when calling the SetAccessControl operation. dfs endpoint +type SetAccessControlOptions = path.SetAccessControlOptions + +// GetAccessControlOptions contains the optional parameters when calling the GetAccessControl operation. +type GetAccessControlOptions = path.GetAccessControlOptions + +// CPKInfo contains a group of parameters for the PathClient.Download method. +type CPKInfo = path.CPKInfo + +// GetSASURLOptions contains the optional parameters for the Client.GetSASURL method. +type GetSASURLOptions = path.GetSASURLOptions + +// SetHTTPHeadersOptions contains the optional parameters for the Client.SetHTTPHeaders method. +type SetHTTPHeadersOptions = path.SetHTTPHeadersOptions + +// HTTPHeaders contains the HTTP headers for path operations. +type HTTPHeaders = path.HTTPHeaders + +// SetMetadataOptions provides set of configurations for Set Metadata on path operation +type SetMetadataOptions = path.SetMetadataOptions + +// SharedKeyCredential contains an account's name and its primary or secondary key. +type SharedKeyCredential = path.SharedKeyCredential + // AccessConditions identifies blob-specific access conditions which you optionally set. -type AccessConditions = exported.AccessConditions +type AccessConditions = path.AccessConditions // SourceAccessConditions identifies blob-specific access conditions which you optionally set. -type SourceAccessConditions = exported.SourceAccessConditions +type SourceAccessConditions = path.SourceAccessConditions // LeaseAccessConditions contains optional parameters to access leased entity. -type LeaseAccessConditions = exported.LeaseAccessConditions +type LeaseAccessConditions = path.LeaseAccessConditions // ModifiedAccessConditions contains a group of parameters for specifying access conditions. 
-type ModifiedAccessConditions = exported.ModifiedAccessConditions +type ModifiedAccessConditions = path.ModifiedAccessConditions // SourceModifiedAccessConditions contains a group of parameters for specifying access conditions. -type SourceModifiedAccessConditions = exported.SourceModifiedAccessConditions +type SourceModifiedAccessConditions = path.SourceModifiedAccessConditions + +// CPKScopeInfo contains a group of parameters for the PathClient.SetMetadata method. +type CPKScopeInfo path.CPKScopeInfo diff --git a/sdk/storage/azdatalake/file/responses.go b/sdk/storage/azdatalake/file/responses.go index 2e6a3cecee2e..3116edab7355 100644 --- a/sdk/storage/azdatalake/file/responses.go +++ b/sdk/storage/azdatalake/file/responses.go @@ -7,10 +7,8 @@ package file import ( - "github.com/Azure/azure-sdk-for-go/sdk/azcore" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" - "time" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/path" ) // SetExpiryResponse contains the response fields for the SetExpiry operation. @@ -22,58 +20,31 @@ type CreateResponse = generated.PathClientCreateResponse // DeleteResponse contains the response fields for the Delete operation. type DeleteResponse = generated.PathClientDeleteResponse -// SetAccessControlResponse contains the response fields for the SetAccessControl operation. -type SetAccessControlResponse = generated.PathClientSetAccessControlResponse - // UpdateAccessControlResponse contains the response fields for the UpdateAccessControlRecursive operation. type UpdateAccessControlResponse = generated.PathClientSetAccessControlRecursiveResponse // RemoveAccessControlResponse contains the response fields for the RemoveAccessControlRecursive operation. type RemoveAccessControlResponse = generated.PathClientSetAccessControlRecursiveResponse -// GetAccessControlResponse contains the response fields for the GetAccessControl operation. -type GetAccessControlResponse = generated.PathClientGetPropertiesResponse - -// GetPropertiesResponse contains the response fields for the GetProperties operation. -type GetPropertiesResponse = blob.GetPropertiesResponse - -// SetMetadataResponse contains the response fields for the SetMetadata operation. -type SetMetadataResponse = blob.SetMetadataResponse - // RenameResponse contains the response fields for the Create operation. -type RenameResponse = generated.PathClientCreateResponse +type RenameResponse struct { + Response generated.PathClientCreateResponse + NewFileClient *Client +} -//// SetHTTPHeadersResponse contains the response fields for the SetHTTPHeaders operation. -//type SetHTTPHeadersResponse = blob.SetHTTPHeadersResponse +// ========================================== path imports =========================================================== -// we need to remove the blob sequence number from the response +// SetAccessControlResponse contains the response fields for the SetAccessControl operation. +type SetAccessControlResponse = path.SetAccessControlResponse // SetHTTPHeadersResponse contains the response from method Client.SetHTTPHeaders. -type SetHTTPHeadersResponse struct { - // ClientRequestID contains the information returned from the x-ms-client-request-id header response. - ClientRequestID *string - - // Date contains the information returned from the Date header response. - Date *time.Time - - // ETag contains the information returned from the ETag header response. 
- ETag *azcore.ETag +type SetHTTPHeadersResponse = path.SetHTTPHeadersResponse - // LastModified contains the information returned from the Last-Modified header response. - LastModified *time.Time - - // RequestID contains the information returned from the x-ms-request-id header response. - RequestID *string +// GetAccessControlResponse contains the response fields for the GetAccessControl operation. +type GetAccessControlResponse = path.GetAccessControlResponse - // Version contains the information returned from the x-ms-version header response. - Version *string -} +// GetPropertiesResponse contains the response fields for the GetProperties operation. +type GetPropertiesResponse = path.GetPropertiesResponse -func formatSetHTTPHeadersResponse(r *SetHTTPHeadersResponse, blobResp *blob.SetHTTPHeadersResponse) { - r.ClientRequestID = blobResp.ClientRequestID - r.Date = blobResp.Date - r.ETag = blobResp.ETag - r.LastModified = blobResp.LastModified - r.RequestID = blobResp.RequestID - r.Version = blobResp.Version -} +// SetMetadataResponse contains the response fields for the SetMetadata operation. +type SetMetadataResponse = path.SetMetadataResponse diff --git a/sdk/storage/azdatalake/filesystem/client.go b/sdk/storage/azdatalake/filesystem/client.go index 0f182909cb69..c0a8146de2ad 100644 --- a/sdk/storage/azdatalake/filesystem/client.go +++ b/sdk/storage/azdatalake/filesystem/client.go @@ -14,6 +14,8 @@ import ( "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/datalakeerror" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/directory" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/file" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/base" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" @@ -54,7 +56,7 @@ func NewClient(filesystemURL string, cred azcore.TokenCredential, options *Clien ClientOptions: options.ClientOptions, } blobContainerClient, _ := container.NewClient(containerURL, cred, &containerClientOpts) - fsClient := base.NewFilesystemClient(filesystemURL, containerURL, blobContainerClient, azClient, nil, (*base.ClientOptions)(conOptions)) + fsClient := base.NewFilesystemClient(filesystemURL, containerURL, blobContainerClient, azClient, nil, &cred, (*base.ClientOptions)(conOptions)) return (*Client)(fsClient), nil } @@ -81,7 +83,7 @@ func NewClientWithNoCredential(filesystemURL string, options *ClientOptions) (*C ClientOptions: options.ClientOptions, } blobContainerClient, _ := container.NewClientWithNoCredential(containerURL, &containerClientOpts) - fsClient := base.NewFilesystemClient(filesystemURL, containerURL, blobContainerClient, azClient, nil, (*base.ClientOptions)(conOptions)) + fsClient := base.NewFilesystemClient(filesystemURL, containerURL, blobContainerClient, azClient, nil, nil, (*base.ClientOptions)(conOptions)) return (*Client)(fsClient), nil } @@ -115,7 +117,7 @@ func NewClientWithSharedKeyCredential(filesystemURL string, cred *SharedKeyCrede return nil, err } blobContainerClient, _ := container.NewClientWithSharedKeyCredential(containerURL, blobSharedKey, &containerClientOpts) - fsClient := base.NewFilesystemClient(filesystemURL, containerURL, blobContainerClient, azClient, cred, (*base.ClientOptions)(conOptions)) + fsClient := base.NewFilesystemClient(filesystemURL, containerURL, 
blobContainerClient, azClient, cred, nil, (*base.ClientOptions)(conOptions)) return (*Client)(fsClient), nil } @@ -151,11 +153,19 @@ func (fs *Client) generatedFSClientWithBlob() *generated.FileSystemClient { return fsClientWithBlob } +func (fs *Client) getClientOptions() *base.ClientOptions { + return base.GetCompositeClientOptions((*base.CompositeClient[generated.FileSystemClient, generated.FileSystemClient, container.Client])(fs)) +} + func (fs *Client) containerClient() *container.Client { _, _, containerClient := base.InnerClients((*base.CompositeClient[generated.FileSystemClient, generated.FileSystemClient, container.Client])(fs)) return containerClient } +func (f *Client) identityCredential() *azcore.TokenCredential { + return base.IdentityCredentialComposite((*base.CompositeClient[generated.FileSystemClient, generated.FileSystemClient, container.Client])(f)) +} + func (fs *Client) sharedKey() *exported.SharedKeyCredential { return base.SharedKeyComposite((*base.CompositeClient[generated.FileSystemClient, generated.FileSystemClient, container.Client])(fs)) } @@ -170,16 +180,36 @@ func (fs *Client) BlobURL() string { return fs.generatedFSClientWithBlob().Endpoint() } +// NewDirectoryClient creates a new directory.Client object by concatenating directory path to the end of this Client's URL. +// The new directory.Client uses the same request policy pipeline as the Client. +func (fs *Client) NewDirectoryClient(directoryPath string) *directory.Client { + dirURL := runtime.JoinPaths(fs.generatedFSClientWithDFS().Endpoint(), directoryPath) + dirURL, blobURL := shared.GetURLs(dirURL) + return (*directory.Client)(base.NewPathClient(dirURL, blobURL, fs.containerClient().NewBlockBlobClient(directoryPath), fs.generatedFSClientWithDFS().InternalClient().WithClientName(shared.DirectoryClient), fs.sharedKey(), fs.identityCredential(), fs.getClientOptions())) +} + +// NewFileClient creates a new file.Client object by concatenating file path to the end of this Client's URL. +// The new file.Client uses the same request policy pipeline as the Client. +func (fs *Client) NewFileClient(filePath string) *file.Client { + fileURL := runtime.JoinPaths(fs.generatedFSClientWithDFS().Endpoint(), filePath) + fileURL, blobURL := shared.GetURLs(fileURL) + return (*file.Client)(base.NewPathClient(fileURL, blobURL, fs.containerClient().NewBlockBlobClient(filePath), fs.generatedFSClientWithDFS().InternalClient().WithClientName(shared.FileClient), fs.sharedKey(), fs.identityCredential(), fs.getClientOptions())) +} + // Create creates a new filesystem under the specified account. (blob3). func (fs *Client) Create(ctx context.Context, options *CreateOptions) (CreateResponse, error) { opts := options.format() - return fs.containerClient().Create(ctx, opts) + resp, err := fs.containerClient().Create(ctx, opts) + err = exported.ConvertToDFSError(err) + return resp, err } // Delete deletes the specified filesystem and any files or directories it contains. (blob3). func (fs *Client) Delete(ctx context.Context, options *DeleteOptions) (DeleteResponse, error) { opts := options.format() - return fs.containerClient().Delete(ctx, opts) + resp, err := fs.containerClient().Delete(ctx, opts) + err = exported.ConvertToDFSError(err) + return resp, err } // GetProperties returns all user-defined metadata, standard HTTP properties, and system properties for the filesystem. (blob3).
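For context, a short sketch of how the new sub-client constructors added above might be used from a filesystem client. It is not part of the patch; the paths are placeholders and the filesystem client is assumed to have been constructed elsewhere with one of the NewClient* constructors.

package example

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/filesystem"
)

// subClients shows NewDirectoryClient and NewFileClient reusing the filesystem
// client's pipeline, shared key or token credential, and client options.
func subClients(ctx context.Context, fsClient *filesystem.Client) {
	dirClient := fsClient.NewDirectoryClient("mydir")
	fileClient := fsClient.NewFileClient("mydir/myfile.txt")

	if _, err := fileClient.Create(ctx, nil); err != nil {
		// Failures come back with filesystem/path error codes via ConvertToDFSError.
		log.Fatal(err)
	}
	_ = dirClient // shown only to illustrate the parallel directory constructor
}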
@@ -189,19 +219,24 @@ func (fs *Client) GetProperties(ctx context.Context, options *GetPropertiesOptio resp, err := fs.containerClient().GetProperties(ctx, opts) // TODO: find a cleaner way to not use lease from blob package formatFilesystemProperties(&newResp, &resp) + err = exported.ConvertToDFSError(err) return newResp, err } // SetMetadata sets one or more user-defined name-value pairs for the specified filesystem. (blob3). func (fs *Client) SetMetadata(ctx context.Context, options *SetMetadataOptions) (SetMetadataResponse, error) { opts := options.format() - return fs.containerClient().SetMetadata(ctx, opts) + resp, err := fs.containerClient().SetMetadata(ctx, opts) + err = exported.ConvertToDFSError(err) + return resp, err } // SetAccessPolicy sets the permissions for the specified filesystem or the files and directories under it. (blob3). func (fs *Client) SetAccessPolicy(ctx context.Context, options *SetAccessPolicyOptions) (SetAccessPolicyResponse, error) { opts := options.format() - return fs.containerClient().SetAccessPolicy(ctx, opts) + resp, err := fs.containerClient().SetAccessPolicy(ctx, opts) + err = exported.ConvertToDFSError(err) + return resp, err } // GetAccessPolicy returns the permissions for the specified filesystem or the files and directories under it. (blob3). @@ -210,6 +245,7 @@ func (fs *Client) GetAccessPolicy(ctx context.Context, options *GetAccessPolicyO newResp := GetAccessPolicyResponse{} resp, err := fs.containerClient().GetAccessPolicy(ctx, opts) formatGetAccessPolicyResponse(&newResp, &resp) + err = exported.ConvertToDFSError(err) return newResp, err } @@ -229,14 +265,17 @@ func (fs *Client) NewListPathsPager(recursive bool, options *ListPathsOptions) * var err error if page == nil { req, err = fs.generatedFSClientWithDFS().ListPathsCreateRequest(ctx, recursive, &listOptions) + err = exported.ConvertToDFSError(err) } else { listOptions.Continuation = page.Continuation req, err = fs.generatedFSClientWithDFS().ListPathsCreateRequest(ctx, recursive, &listOptions) + err = exported.ConvertToDFSError(err) } if err != nil { return ListPathsSegmentResponse{}, err } resp, err := fs.generatedFSClientWithDFS().InternalClient().Pipeline().Do(req) + err = exported.ConvertToDFSError(err) if err != nil { return ListPathsSegmentResponse{}, err } @@ -261,14 +300,17 @@ func (fs *Client) NewListDeletedPathsPager(options *ListDeletedPathsOptions) *ru var err error if page == nil { req, err = fs.generatedFSClientWithDFS().ListBlobHierarchySegmentCreateRequest(ctx, &listOptions) + err = exported.ConvertToDFSError(err) } else { listOptions.Marker = page.NextMarker req, err = fs.generatedFSClientWithDFS().ListBlobHierarchySegmentCreateRequest(ctx, &listOptions) + err = exported.ConvertToDFSError(err) } if err != nil { return ListDeletedPathsSegmentResponse{}, err } resp, err := fs.generatedFSClientWithDFS().InternalClient().Pipeline().Do(req) + err = exported.ConvertToDFSError(err) if err != nil { return ListDeletedPathsSegmentResponse{}, err } @@ -288,6 +330,7 @@ func (fs *Client) GetSASURL(permissions sas.FilesystemPermissions, expiry time.T } st := o.format() urlParts, err := azdatalake.ParseURL(fs.BlobURL()) + err = exported.ConvertToDFSError(err) if err != nil { return "", err } @@ -299,6 +342,7 @@ func (fs *Client) GetSASURL(permissions sas.FilesystemPermissions, expiry time.T StartTime: st, ExpiryTime: expiry.UTC(), }.SignWithSharedKey(fs.sharedKey()) + err = exported.ConvertToDFSError(err) if err != nil { return "", err } diff --git 
a/sdk/storage/azdatalake/filesystem/client_test.go b/sdk/storage/azdatalake/filesystem/client_test.go index 3615506e870e..37df43b164fe 100644 --- a/sdk/storage/azdatalake/filesystem/client_test.go +++ b/sdk/storage/azdatalake/filesystem/client_test.go @@ -63,7 +63,7 @@ func validateFilesystemDeleted(_require *require.Assertions, filesystemClient *f _, err := filesystemClient.GetAccessPolicy(context.Background(), nil) _require.NotNil(err) - testcommon.ValidateErrorCode(_require, err, datalakeerror.ContainerNotFound) + testcommon.ValidateErrorCode(_require, err, datalakeerror.FilesystemNotFound) } func (s *RecordedTestSuite) TestCreateFilesystem() { @@ -224,7 +224,7 @@ func (s *RecordedTestSuite) TestFilesystemDeleteNonExistent() { _, err = fsClient.Delete(context.Background(), nil) _require.NotNil(err) - testcommon.ValidateErrorCode(_require, err, datalakeerror.ContainerNotFound) + testcommon.ValidateErrorCode(_require, err, datalakeerror.FilesystemNotFound) } func (s *RecordedTestSuite) TestFilesystemDeleteIfModifiedSinceTrue() { @@ -432,7 +432,7 @@ func (s *RecordedTestSuite) TestFilesystemSetMetadataNonExistent() { _, err = fsClient.SetMetadata(context.Background(), nil) _require.NotNil(err) - testcommon.ValidateErrorCode(_require, err, datalakeerror.ContainerNotFound) + testcommon.ValidateErrorCode(_require, err, datalakeerror.FilesystemNotFound) } func (s *RecordedTestSuite) TestSetEmptyAccessPolicy() { diff --git a/sdk/storage/azdatalake/filesystem/constants.go b/sdk/storage/azdatalake/filesystem/constants.go index f7ff23ec01cd..3e0c373b87a1 100644 --- a/sdk/storage/azdatalake/filesystem/constants.go +++ b/sdk/storage/azdatalake/filesystem/constants.go @@ -7,7 +7,6 @@ package filesystem import "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob" -import "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/lease" // PublicAccessType defines values for AccessType - private (default) or file or filesystem. type PublicAccessType = azblob.PublicAccessType @@ -16,47 +15,3 @@ const ( File PublicAccessType = azblob.PublicAccessTypeBlob Filesystem PublicAccessType = azblob.PublicAccessTypeContainer ) - -// TODO: figure out a way to import this from datalake rather than blob again - -// StatusType defines values for StatusType -type StatusType = lease.StatusType - -const ( - StatusTypeLocked StatusType = lease.StatusTypeLocked - StatusTypeUnlocked StatusType = lease.StatusTypeUnlocked -) - -// PossibleStatusTypeValues returns the possible values for the StatusType const type. -func PossibleStatusTypeValues() []StatusType { - return lease.PossibleStatusTypeValues() -} - -// DurationType defines values for DurationType -type DurationType = lease.DurationType - -const ( - DurationTypeInfinite DurationType = lease.DurationTypeInfinite - DurationTypeFixed DurationType = lease.DurationTypeFixed -) - -// PossibleDurationTypeValues returns the possible values for the DurationType const type. -func PossibleDurationTypeValues() []DurationType { - return lease.PossibleDurationTypeValues() -} - -// StateType defines values for StateType -type StateType = lease.StateType - -const ( - StateTypeAvailable StateType = lease.StateTypeAvailable - StateTypeLeased StateType = lease.StateTypeLeased - StateTypeExpired StateType = lease.StateTypeExpired - StateTypeBreaking StateType = lease.StateTypeBreaking - StateTypeBroken StateType = lease.StateTypeBroken -) - -// PossibleStateTypeValues returns the possible values for the StateType const type. 
-func PossibleStateTypeValues() []StateType { - return lease.PossibleStateTypeValues() -} diff --git a/sdk/storage/azdatalake/filesystem/responses.go b/sdk/storage/azdatalake/filesystem/responses.go index 9a2112657bcc..d32f0aed538d 100644 --- a/sdk/storage/azdatalake/filesystem/responses.go +++ b/sdk/storage/azdatalake/filesystem/responses.go @@ -9,6 +9,7 @@ package filesystem import ( "github.com/Azure/azure-sdk-for-go/sdk/azcore" "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" "time" ) @@ -98,13 +99,13 @@ type GetPropertiesResponse struct { LastModified *time.Time // LeaseDuration contains the information returned from the x-ms-lease-duration header response. - LeaseDuration *DurationType + LeaseDuration *azdatalake.DurationType // LeaseState contains the information returned from the x-ms-lease-state header response. - LeaseState *StateType + LeaseState *azdatalake.StateType // LeaseStatus contains the information returned from the x-ms-lease-status header response. - LeaseStatus *StatusType + LeaseStatus *azdatalake.StatusType // Metadata contains the information returned from the x-ms-meta header response. Metadata map[string]*string @@ -116,6 +117,7 @@ type GetPropertiesResponse struct { Version *string } +// removes the blob prefix in access type func formatFilesystemProperties(r *GetPropertiesResponse, contResp *container.GetPropertiesResponse) { r.PublicAccess = contResp.BlobPublicAccess r.ClientRequestID = contResp.ClientRequestID diff --git a/sdk/storage/azdatalake/go.mod b/sdk/storage/azdatalake/go.mod index b1b48ee155fd..17498bc501b5 100644 --- a/sdk/storage/azdatalake/go.mod +++ b/sdk/storage/azdatalake/go.mod @@ -3,7 +3,7 @@ module github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake go 1.18 require ( - github.com/Azure/azure-sdk-for-go/sdk/azcore v1.6.1 + github.com/Azure/azure-sdk-for-go/sdk/azcore v1.7.0 github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0 github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.0.0 github.com/stretchr/testify v1.7.1 diff --git a/sdk/storage/azdatalake/go.sum b/sdk/storage/azdatalake/go.sum index 911682659b2b..3482ec48d6a5 100644 --- a/sdk/storage/azdatalake/go.sum +++ b/sdk/storage/azdatalake/go.sum @@ -1,5 +1,5 @@ -github.com/Azure/azure-sdk-for-go/sdk/azcore v1.6.1 h1:SEy2xmstIphdPwNBUi7uhvjyjhVKISfwjfOJmuy7kg4= -github.com/Azure/azure-sdk-for-go/sdk/azcore v1.6.1/go.mod h1:bjGvMhVMb+EEm3VRNQawDMUyMMjo+S5ewNjflkep/0Q= +github.com/Azure/azure-sdk-for-go/sdk/azcore v1.7.0 h1:8q4SaHjFsClSvuVne0ID/5Ka8u3fcIHyqkLjcFpNRHQ= +github.com/Azure/azure-sdk-for-go/sdk/azcore v1.7.0/go.mod h1:bjGvMhVMb+EEm3VRNQawDMUyMMjo+S5ewNjflkep/0Q= github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.1.0 h1:QkAcEIAKbNL4KoFr4SathZPhDhF4mVwpBMFlYjyAqy8= github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0 h1:sXr+ck84g/ZlZUOZiNELInmMgOsuGwdjjVkEIde0OtY= github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0/go.mod h1:okt5dMMTOFjX/aovMlrjvvXoPMBVSPzk9185BT0+eZM= diff --git a/sdk/storage/azdatalake/internal/base/clients.go b/sdk/storage/azdatalake/internal/base/clients.go index c80152dc4b83..cab52ce400b2 100644 --- a/sdk/storage/azdatalake/internal/base/clients.go +++ b/sdk/storage/azdatalake/internal/base/clients.go @@ -36,9 +36,10 @@ type CompositeClient[T, K, U any] struct { // generated client with blob innerK *K // blob client - innerU *U - sharedKey *exported.SharedKeyCredential - options *ClientOptions 
+ innerU *U + sharedKey *exported.SharedKeyCredential + identityCred *azcore.TokenCredential + options *ClientOptions } func InnerClients[T, K, U any](client *CompositeClient[T, K, U]) (*T, *K, *U) { @@ -49,33 +50,40 @@ func SharedKeyComposite[T, K, U any](client *CompositeClient[T, K, U]) *exported return client.sharedKey } -func NewFilesystemClient(fsURL string, fsURLWithBlobEndpoint string, client *container.Client, azClient *azcore.Client, sharedKey *exported.SharedKeyCredential, options *ClientOptions) *CompositeClient[generated.FileSystemClient, generated.FileSystemClient, container.Client] { +func IdentityCredentialComposite[T, K, U any](client *CompositeClient[T, K, U]) *azcore.TokenCredential { + return client.identityCred +} + +func NewFilesystemClient(fsURL string, fsURLWithBlobEndpoint string, client *container.Client, azClient *azcore.Client, sharedKey *exported.SharedKeyCredential, identityCred *azcore.TokenCredential, options *ClientOptions) *CompositeClient[generated.FileSystemClient, generated.FileSystemClient, container.Client] { return &CompositeClient[generated.FileSystemClient, generated.FileSystemClient, container.Client]{ - innerT: generated.NewFilesystemClient(fsURL, azClient), - innerK: generated.NewFilesystemClient(fsURLWithBlobEndpoint, azClient), - sharedKey: sharedKey, - innerU: client, - options: options, + innerT: generated.NewFilesystemClient(fsURL, azClient), + innerK: generated.NewFilesystemClient(fsURLWithBlobEndpoint, azClient), + sharedKey: sharedKey, + identityCred: identityCred, + innerU: client, + options: options, } } -func NewServiceClient(serviceURL string, serviceURLWithBlobEndpoint string, client *service.Client, azClient *azcore.Client, sharedKey *exported.SharedKeyCredential, options *ClientOptions) *CompositeClient[generated.ServiceClient, generated.ServiceClient, service.Client] { +func NewServiceClient(serviceURL string, serviceURLWithBlobEndpoint string, client *service.Client, azClient *azcore.Client, sharedKey *exported.SharedKeyCredential, identityCred *azcore.TokenCredential, options *ClientOptions) *CompositeClient[generated.ServiceClient, generated.ServiceClient, service.Client] { return &CompositeClient[generated.ServiceClient, generated.ServiceClient, service.Client]{ - innerT: generated.NewServiceClient(serviceURL, azClient), - innerK: generated.NewServiceClient(serviceURLWithBlobEndpoint, azClient), - sharedKey: sharedKey, - innerU: client, - options: options, + innerT: generated.NewServiceClient(serviceURL, azClient), + innerK: generated.NewServiceClient(serviceURLWithBlobEndpoint, azClient), + sharedKey: sharedKey, + identityCred: identityCred, + innerU: client, + options: options, } } -func NewPathClient(pathURL string, pathURLWithBlobEndpoint string, client *blockblob.Client, azClient *azcore.Client, sharedKey *exported.SharedKeyCredential, options *ClientOptions) *CompositeClient[generated.PathClient, generated.PathClient, blockblob.Client] { +func NewPathClient(pathURL string, pathURLWithBlobEndpoint string, client *blockblob.Client, azClient *azcore.Client, sharedKey *exported.SharedKeyCredential, identityCred *azcore.TokenCredential, options *ClientOptions) *CompositeClient[generated.PathClient, generated.PathClient, blockblob.Client] { return &CompositeClient[generated.PathClient, generated.PathClient, blockblob.Client]{ - innerT: generated.NewPathClient(pathURL, azClient), - innerK: generated.NewPathClient(pathURLWithBlobEndpoint, azClient), - sharedKey: sharedKey, - innerU: client, - options: options, + innerT: 
generated.NewPathClient(pathURL, azClient), + innerK: generated.NewPathClient(pathURLWithBlobEndpoint, azClient), + sharedKey: sharedKey, + identityCred: identityCred, + innerU: client, + options: options, } } diff --git a/sdk/storage/azdatalake/internal/exported/exported.go b/sdk/storage/azdatalake/internal/exported/exported.go index 6a91ea05453a..ab0c5ebea4cf 100644 --- a/sdk/storage/azdatalake/internal/exported/exported.go +++ b/sdk/storage/azdatalake/internal/exported/exported.go @@ -7,8 +7,11 @@ package exported import ( + "errors" "fmt" + "github.com/Azure/azure-sdk-for-go/sdk/azcore" "strconv" + "strings" ) const SnapshotTimeFormat = "2006-01-02T15:04:05.0000000Z07:00" @@ -33,3 +36,19 @@ func FormatHTTPRange(r HTTPRange) *string { dataRange := fmt.Sprintf("bytes=%v-%s", r.Offset, endOffset) return &dataRange } + +func ConvertToDFSError(err error) error { + if err == nil { + return nil + } + var responseErr *azcore.ResponseError + isRespErr := errors.As(err, &responseErr) + if isRespErr { + responseErr.ErrorCode = strings.Replace(responseErr.ErrorCode, "blob", "path", -1) + responseErr.ErrorCode = strings.Replace(responseErr.ErrorCode, "Blob", "Path", -1) + responseErr.ErrorCode = strings.Replace(responseErr.ErrorCode, "container", "filesystem", -1) + responseErr.ErrorCode = strings.Replace(responseErr.ErrorCode, "Container", "Filesystem", -1) + return responseErr + } + return err +} diff --git a/sdk/storage/azdatalake/internal/exported/path.go b/sdk/storage/azdatalake/internal/exported/path.go deleted file mode 100644 index eabd8aa3ddaa..000000000000 --- a/sdk/storage/azdatalake/internal/exported/path.go +++ /dev/null @@ -1 +0,0 @@ -package exported diff --git a/sdk/storage/azdatalake/internal/exported/user_delegation_credential.go b/sdk/storage/azdatalake/internal/exported/user_delegation_credential.go index 91b933bf5737..047e265e046e 100644 --- a/sdk/storage/azdatalake/internal/exported/user_delegation_credential.go +++ b/sdk/storage/azdatalake/internal/exported/user_delegation_credential.go @@ -10,7 +10,7 @@ import ( "crypto/hmac" "crypto/sha256" "encoding/base64" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/service" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" ) // NewUserDelegationCredential creates a new UserDelegationCredential using a Storage account's Name and a user delegation Key from it @@ -22,7 +22,7 @@ func NewUserDelegationCredential(accountName string, udk UserDelegationKey) *Use } // UserDelegationKey contains UserDelegationKey. -type UserDelegationKey = service.UserDelegationKey +type UserDelegationKey = generated.UserDelegationKey // UserDelegationCredential contains an account's name and its user delegation key. type UserDelegationCredential struct { diff --git a/sdk/storage/azdatalake/internal/generated/user_delegation_key.go b/sdk/storage/azdatalake/internal/generated/user_delegation_key.go new file mode 100644 index 000000000000..367765fa19cd --- /dev/null +++ b/sdk/storage/azdatalake/internal/generated/user_delegation_key.go @@ -0,0 +1,144 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. 
+ +package generated + +import ( + "context" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "net/http" + "strconv" + "time" +) + +// KeyInfo - Key information +type KeyInfo struct { + // REQUIRED; The date-time the key expires in ISO 8601 UTC time + Expiry *string `xml:"Expiry"` + + // REQUIRED; The date-time the key is active in ISO 8601 UTC time + Start *string `xml:"Start"` +} + +// ServiceClientGetUserDelegationKeyOptions contains the optional parameters for the ServiceClient.GetUserDelegationKey method. +type ServiceClientGetUserDelegationKeyOptions struct { + // Provides a client-generated, opaque value with a 1 KB character limit that is recorded in the analytics logs when storage + // analytics logging is enabled. + RequestID *string + // The timeout parameter is expressed in seconds. For more information, see Setting Timeouts for Blob Service Operations. + // [https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/setting-timeouts-for-blob-service-operations] + Timeout *int32 +} + +// ServiceClientGetUserDelegationKeyResponse contains the response from method ServiceClient.GetUserDelegationKey. +type ServiceClientGetUserDelegationKeyResponse struct { + UserDelegationKey + // ClientRequestID contains the information returned from the x-ms-client-request-id header response. + ClientRequestID *string `xml:"ClientRequestID"` + + // Date contains the information returned from the Date header response. + Date *time.Time `xml:"Date"` + + // RequestID contains the information returned from the x-ms-request-id header response. + RequestID *string `xml:"RequestID"` + + // Version contains the information returned from the x-ms-version header response. + Version *string `xml:"Version"` +} + +// UserDelegationKey - A user delegation key +type UserDelegationKey struct { + // REQUIRED; The date-time the key expires + SignedExpiry *time.Time `xml:"SignedExpiry"` + + // REQUIRED; The Azure Active Directory object ID in GUID format. + SignedOID *string `xml:"SignedOid"` + + // REQUIRED; Abbreviation of the Azure Storage service that accepts the key + SignedService *string `xml:"SignedService"` + + // REQUIRED; The date-time the key is active + SignedStart *time.Time `xml:"SignedStart"` + + // REQUIRED; The Azure Active Directory tenant ID in GUID format + SignedTID *string `xml:"SignedTid"` + + // REQUIRED; The service version that created the key + SignedVersion *string `xml:"SignedVersion"` + + // REQUIRED; The key as a base64 string + Value *string `xml:"Value"` +} + +// GetUserDelegationKey - Retrieves a user delegation key for the Blob service. This is only a valid operation when using +// bearer token authentication. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2020-10-02 +// - keyInfo - Key information +// - options - ServiceClientGetUserDelegationKeyOptions contains the optional parameters for the ServiceClient.GetUserDelegationKey +// method. 
+func (client *ServiceClient) GetUserDelegationKey(ctx context.Context, keyInfo KeyInfo, options *ServiceClientGetUserDelegationKeyOptions) (ServiceClientGetUserDelegationKeyResponse, error) { + req, err := client.getUserDelegationKeyCreateRequest(ctx, keyInfo, options) + if err != nil { + return ServiceClientGetUserDelegationKeyResponse{}, err + } + resp, err := client.internal.Pipeline().Do(req) + if err != nil { + return ServiceClientGetUserDelegationKeyResponse{}, err + } + if !runtime.HasStatusCode(resp, http.StatusOK) { + return ServiceClientGetUserDelegationKeyResponse{}, runtime.NewResponseError(resp) + } + return client.getUserDelegationKeyHandleResponse(resp) +} + +// getUserDelegationKeyCreateRequest creates the GetUserDelegationKey request. +func (client *ServiceClient) getUserDelegationKeyCreateRequest(ctx context.Context, keyInfo KeyInfo, options *ServiceClientGetUserDelegationKeyOptions) (*policy.Request, error) { + req, err := runtime.NewRequest(ctx, http.MethodPost, client.endpoint) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("restype", "service") + reqQP.Set("comp", "userdelegationkey") + if options != nil && options.Timeout != nil { + reqQP.Set("timeout", strconv.FormatInt(int64(*options.Timeout), 10)) + } + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["x-ms-version"] = []string{"2020-10-02"} + if options != nil && options.RequestID != nil { + req.Raw().Header["x-ms-client-request-id"] = []string{*options.RequestID} + } + req.Raw().Header["Accept"] = []string{"application/xml"} + return req, runtime.MarshalAsXML(req, keyInfo) +} + +// getUserDelegationKeyHandleResponse handles the GetUserDelegationKey response. +func (client *ServiceClient) getUserDelegationKeyHandleResponse(resp *http.Response) (ServiceClientGetUserDelegationKeyResponse, error) { + result := ServiceClientGetUserDelegationKeyResponse{} + if val := resp.Header.Get("x-ms-client-request-id"); val != "" { + result.ClientRequestID = &val + } + if val := resp.Header.Get("x-ms-request-id"); val != "" { + result.RequestID = &val + } + if val := resp.Header.Get("x-ms-version"); val != "" { + result.Version = &val + } + if val := resp.Header.Get("Date"); val != "" { + date, err := time.Parse(time.RFC1123, val) + if err != nil { + return ServiceClientGetUserDelegationKeyResponse{}, err + } + result.Date = &date + } + if err := runtime.UnmarshalAsXML(resp, &result.UserDelegationKey); err != nil { + return ServiceClientGetUserDelegationKeyResponse{}, err + } + return result, nil +} diff --git a/sdk/storage/azdatalake/internal/path/constants.go b/sdk/storage/azdatalake/internal/path/constants.go new file mode 100644 index 000000000000..7dd11049e38e --- /dev/null +++ b/sdk/storage/azdatalake/internal/path/constants.go @@ -0,0 +1,35 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. + +package path + +import "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" + +// EncryptionAlgorithmType defines values for EncryptionAlgorithmType. 
+type EncryptionAlgorithmType = blob.EncryptionAlgorithmType + +const ( + EncryptionAlgorithmTypeNone EncryptionAlgorithmType = blob.EncryptionAlgorithmTypeNone + EncryptionAlgorithmTypeAES256 EncryptionAlgorithmType = blob.EncryptionAlgorithmTypeAES256 +) + +type ImmutabilityPolicyMode = blob.ImmutabilityPolicyMode + +const ( + ImmutabilityPolicyModeMutable ImmutabilityPolicyMode = blob.ImmutabilityPolicyModeMutable + ImmutabilityPolicyModeUnlocked ImmutabilityPolicyMode = blob.ImmutabilityPolicyModeUnlocked + ImmutabilityPolicyModeLocked ImmutabilityPolicyMode = blob.ImmutabilityPolicyModeLocked +) + +// CopyStatusType defines values for CopyStatusType +type CopyStatusType = blob.CopyStatusType + +const ( + CopyStatusTypePending CopyStatusType = blob.CopyStatusTypePending + CopyStatusTypeSuccess CopyStatusType = blob.CopyStatusTypeSuccess + CopyStatusTypeAborted CopyStatusType = blob.CopyStatusTypeAborted + CopyStatusTypeFailed CopyStatusType = blob.CopyStatusTypeFailed +) diff --git a/sdk/storage/azdatalake/internal/path/models.go b/sdk/storage/azdatalake/internal/path/models.go new file mode 100644 index 000000000000..893bc9d40d19 --- /dev/null +++ b/sdk/storage/azdatalake/internal/path/models.go @@ -0,0 +1,243 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. + +package path + +import ( + "errors" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/datalakeerror" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" + "time" +) + +// GetPropertiesOptions contains the optional parameters for the Client.GetProperties method +type GetPropertiesOptions struct { + AccessConditions *AccessConditions + CPKInfo *CPKInfo +} + +func FormatGetPropertiesOptions(o *GetPropertiesOptions) *blob.GetPropertiesOptions { + if o == nil { + return nil + } + accessConditions := exported.FormatBlobAccessConditions(o.AccessConditions) + return &blob.GetPropertiesOptions{ + AccessConditions: accessConditions, + CPKInfo: &blob.CPKInfo{ + EncryptionKey: o.CPKInfo.EncryptionKey, + EncryptionAlgorithm: o.CPKInfo.EncryptionAlgorithm, + EncryptionKeySHA256: o.CPKInfo.EncryptionKeySHA256, + }, + } +} + +// ===================================== PATH IMPORTS =========================================== + +// SetAccessControlOptions contains the optional parameters when calling the SetAccessControl operation. dfs endpoint +type SetAccessControlOptions struct { + // Owner is the owner of the path. + Owner *string + // Group is the owning group of the path. + Group *string + // ACL is the access control list for the path. + ACL *string + // Permissions is the octal representation of the permissions for user, group and mask. + Permissions *string + // AccessConditions contains parameters for accessing the path. 
+ AccessConditions *AccessConditions +} + +func FormatSetAccessControlOptions(o *SetAccessControlOptions) (*generated.PathClientSetAccessControlOptions, *generated.LeaseAccessConditions, *generated.ModifiedAccessConditions, error) { + if o == nil { + return nil, nil, nil, datalakeerror.MissingParameters + } + // call path formatter since we're hitting dfs in this operation + leaseAccessConditions, modifiedAccessConditions := exported.FormatPathAccessConditions(o.AccessConditions) + if o.Owner == nil && o.Group == nil && o.ACL == nil && o.Permissions == nil { + return nil, nil, nil, errors.New("at least one parameter should be set for SetAccessControl API") + } + return &generated.PathClientSetAccessControlOptions{ + Owner: o.Owner, + Group: o.Group, + ACL: o.ACL, + Permissions: o.Permissions, + }, leaseAccessConditions, modifiedAccessConditions, nil +} + +// GetAccessControlOptions contains the optional parameters when calling the GetAccessControl operation. +type GetAccessControlOptions struct { + // UPN is the user principal name. + UPN *bool + // AccessConditions contains parameters for accessing the path. + AccessConditions *AccessConditions +} + +func FormatGetAccessControlOptions(o *GetAccessControlOptions) (*generated.PathClientGetPropertiesOptions, *generated.LeaseAccessConditions, *generated.ModifiedAccessConditions) { + action := generated.PathGetPropertiesActionGetAccessControl + if o == nil { + return &generated.PathClientGetPropertiesOptions{ + Action: &action, + }, nil, nil + } + // call path formatter since we're hitting dfs in this operation + leaseAccessConditions, modifiedAccessConditions := exported.FormatPathAccessConditions(o.AccessConditions) + return &generated.PathClientGetPropertiesOptions{ + Upn: o.UPN, + Action: &action, + }, leaseAccessConditions, modifiedAccessConditions +} + +// CPKInfo contains a group of parameters for the PathClient.Download method. +type CPKInfo struct { + EncryptionAlgorithm *EncryptionAlgorithmType + EncryptionKey *string + EncryptionKeySHA256 *string +} + +// GetSASURLOptions contains the optional parameters for the Client.GetSASURL method. +type GetSASURLOptions struct { + StartTime *time.Time +} + +func FormatGetSASURLOptions(o *GetSASURLOptions) time.Time { + if o == nil { + return time.Time{} + } + + var st time.Time + if o.StartTime != nil { + st = o.StartTime.UTC() + } else { + st = time.Time{} + } + return st +} + +// SetHTTPHeadersOptions contains the optional parameters for the Client.SetHTTPHeaders method. +type SetHTTPHeadersOptions struct { + AccessConditions *AccessConditions +} + +func FormatSetHTTPHeadersOptions(o *SetHTTPHeadersOptions, httpHeaders HTTPHeaders) (*blob.SetHTTPHeadersOptions, blob.HTTPHeaders) { + httpHeaderOpts := blob.HTTPHeaders{ + BlobCacheControl: httpHeaders.CacheControl, + BlobContentDisposition: httpHeaders.ContentDisposition, + BlobContentEncoding: httpHeaders.ContentEncoding, + BlobContentLanguage: httpHeaders.ContentLanguage, + BlobContentMD5: httpHeaders.ContentMD5, + BlobContentType: httpHeaders.ContentType, + } + if o == nil { + return nil, httpHeaderOpts + } + accessConditions := exported.FormatBlobAccessConditions(o.AccessConditions) + return &blob.SetHTTPHeadersOptions{ + AccessConditions: accessConditions, + }, httpHeaderOpts +} + +// HTTPHeaders contains the HTTP headers for path operations. +type HTTPHeaders struct { + // Sets the path's cache control. If specified, this property is stored with the path and returned with a read request. 
+	CacheControl *string
+	// Sets the path's Content-Disposition header.
+	ContentDisposition *string
+	// Sets the path's content encoding. If specified, this property is stored with the path and returned with a read
+	// request.
+	ContentEncoding *string
+	// Sets the path's content language. If specified, this property is stored with the path and returned with a read
+	// request.
+	ContentLanguage *string
+	// Specify the transactional md5 for the body, to be validated by the service.
+	ContentMD5 []byte
+	// Sets the path's content type. If specified, this property is stored with the path and returned with a read request.
+	ContentType *string
+}
+
+//
+//func (o HTTPHeaders) formatBlobHTTPHeaders() blob.HTTPHeaders {
+//
+//	opts := blob.HTTPHeaders{
+//		BlobCacheControl:       o.CacheControl,
+//		BlobContentDisposition: o.ContentDisposition,
+//		BlobContentEncoding:    o.ContentEncoding,
+//		BlobContentLanguage:    o.ContentLanguage,
+//		BlobContentMD5:         o.ContentMD5,
+//		BlobContentType:        o.ContentType,
+//	}
+//	return opts
+//}
+
+func FormatPathHTTPHeaders(o *HTTPHeaders) *generated.PathHTTPHeaders {
+	// TODO: will be used for file related ops, like append
+	if o == nil {
+		return nil
+	}
+	opts := generated.PathHTTPHeaders{
+		CacheControl:             o.CacheControl,
+		ContentDisposition:       o.ContentDisposition,
+		ContentEncoding:          o.ContentEncoding,
+		ContentLanguage:          o.ContentLanguage,
+		ContentMD5:               o.ContentMD5,
+		ContentType:              o.ContentType,
+		TransactionalContentHash: o.ContentMD5,
+	}
+	return &opts
+}
+
+// SetMetadataOptions provides a set of configurations for the Set Metadata on path operation
+type SetMetadataOptions struct {
+	Metadata         map[string]*string
+	AccessConditions *AccessConditions
+	CPKInfo          *CPKInfo
+	CPKScopeInfo     *CPKScopeInfo
+}
+
+func FormatSetMetadataOptions(o *SetMetadataOptions) (*blob.SetMetadataOptions, map[string]*string) {
+	if o == nil {
+		return nil, nil
+	}
+	accessConditions := exported.FormatBlobAccessConditions(o.AccessConditions)
+	opts := &blob.SetMetadataOptions{
+		AccessConditions: accessConditions,
+	}
+	if o.CPKInfo != nil {
+		opts.CPKInfo = &blob.CPKInfo{
+			EncryptionKey:       o.CPKInfo.EncryptionKey,
+			EncryptionAlgorithm: o.CPKInfo.EncryptionAlgorithm,
+			EncryptionKeySHA256: o.CPKInfo.EncryptionKeySHA256,
+		}
+	}
+	if o.CPKScopeInfo != nil {
+		opts.CPKScopeInfo = (*blob.CPKScopeInfo)(o.CPKScopeInfo)
+	}
+	return opts, o.Metadata
+}
+
+// ========================================= constants =========================================
+
+// SharedKeyCredential contains an account's name and its primary or secondary key.
+type SharedKeyCredential = exported.SharedKeyCredential
+
+// AccessConditions identifies access conditions which you optionally set.
+type AccessConditions = exported.AccessConditions
+
+// SourceAccessConditions identifies source access conditions which you optionally set.
+type SourceAccessConditions = exported.SourceAccessConditions
+
+// LeaseAccessConditions contains optional parameters to access leased entity.
+type LeaseAccessConditions = exported.LeaseAccessConditions
+
+// ModifiedAccessConditions contains a group of parameters for specifying access conditions.
+type ModifiedAccessConditions = exported.ModifiedAccessConditions
+
+// SourceModifiedAccessConditions contains a group of parameters for specifying access conditions.
+type SourceModifiedAccessConditions = exported.SourceModifiedAccessConditions
+
+// CPKScopeInfo contains a group of parameters for the Client.SetMetadata() method.
+type CPKScopeInfo blob.CPKScopeInfo diff --git a/sdk/storage/azdatalake/internal/path/responses.go b/sdk/storage/azdatalake/internal/path/responses.go new file mode 100644 index 000000000000..915d7b104374 --- /dev/null +++ b/sdk/storage/azdatalake/internal/path/responses.go @@ -0,0 +1,269 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. + +package path + +import ( + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" + "net/http" + "time" +) + +// SetAccessControlResponse contains the response fields for the SetAccessControl operation. +type SetAccessControlResponse = generated.PathClientSetAccessControlResponse + +// GetAccessControlResponse contains the response fields for the GetAccessControl operation. +type GetAccessControlResponse = generated.PathClientGetPropertiesResponse + +// TODO: removed BlobSequenceNumber, BlobCommittedBlockCount and BlobType headers from the original response: + +// GetPropertiesResponse contains the response fields for the GetProperties operation. +type GetPropertiesResponse struct { + // AcceptRanges contains the information returned from the Accept-Ranges header response. + AcceptRanges *string + + // AccessTier contains the information returned from the x-ms-access-tier header response. + AccessTier *string + + // AccessTierChangeTime contains the information returned from the x-ms-access-tier-change-time header response. + AccessTierChangeTime *time.Time + + // AccessTierInferred contains the information returned from the x-ms-access-tier-inferred header response. + AccessTierInferred *bool + + // ArchiveStatus contains the information returned from the x-ms-archive-status header response. + ArchiveStatus *string + + // CacheControl contains the information returned from the Cache-Control header response. + CacheControl *string + + // ClientRequestID contains the information returned from the x-ms-client-request-id header response. + ClientRequestID *string + + // ContentDisposition contains the information returned from the Content-Disposition header response. + ContentDisposition *string + + // ContentEncoding contains the information returned from the Content-Encoding header response. + ContentEncoding *string + + // ContentLanguage contains the information returned from the Content-Language header response. + ContentLanguage *string + + // ContentLength contains the information returned from the Content-Length header response. + ContentLength *int64 + + // ContentMD5 contains the information returned from the Content-MD5 header response. + ContentMD5 []byte + + // ContentType contains the information returned from the Content-Type header response. + ContentType *string + + // CopyCompletionTime contains the information returned from the x-ms-copy-completion-time header response. + CopyCompletionTime *time.Time + + // CopyID contains the information returned from the x-ms-copy-id header response. + CopyID *string + + // CopyProgress contains the information returned from the x-ms-copy-progress header response. + CopyProgress *string + + // CopySource contains the information returned from the x-ms-copy-source header response. 
+ CopySource *string + + // CopyStatus contains the information returned from the x-ms-copy-status header response. + CopyStatus *CopyStatusType + + // CopyStatusDescription contains the information returned from the x-ms-copy-status-description header response. + CopyStatusDescription *string + + // CreationTime contains the information returned from the x-ms-creation-time header response. + CreationTime *time.Time + + // Date contains the information returned from the Date header response. + Date *time.Time + + // DestinationSnapshot contains the information returned from the x-ms-copy-destination-snapshot header response. + DestinationSnapshot *string + + // ETag contains the information returned from the ETag header response. + ETag *azcore.ETag + + // EncryptionKeySHA256 contains the information returned from the x-ms-encryption-key-sha256 header response. + EncryptionKeySHA256 *string + + // EncryptionScope contains the information returned from the x-ms-encryption-scope header response. + EncryptionScope *string + + // ExpiresOn contains the information returned from the x-ms-expiry-time header response. + ExpiresOn *time.Time + + // ImmutabilityPolicyExpiresOn contains the information returned from the x-ms-immutability-policy-until-date header response. + ImmutabilityPolicyExpiresOn *time.Time + + // ImmutabilityPolicyMode contains the information returned from the x-ms-immutability-policy-mode header response. + ImmutabilityPolicyMode *ImmutabilityPolicyMode + + // IsCurrentVersion contains the information returned from the x-ms-is-current-version header response. + IsCurrentVersion *bool + + // IsIncrementalCopy contains the information returned from the x-ms-incremental-copy header response. + IsIncrementalCopy *bool + + // IsSealed contains the information returned from the x-ms-blob-sealed header response. + IsSealed *bool + + // IsServerEncrypted contains the information returned from the x-ms-server-encrypted header response. + IsServerEncrypted *bool + + // LastAccessed contains the information returned from the x-ms-last-access-time header response. + LastAccessed *time.Time + + // LastModified contains the information returned from the Last-Modified header response. + LastModified *time.Time + + // LeaseDuration contains the information returned from the x-ms-lease-duration header response. + LeaseDuration *azdatalake.DurationType + + // LeaseState contains the information returned from the x-ms-lease-state header response. + LeaseState *azdatalake.StateType + + // LeaseStatus contains the information returned from the x-ms-lease-status header response. + LeaseStatus *azdatalake.StatusType + + // LegalHold contains the information returned from the x-ms-legal-hold header response. + LegalHold *bool + + // Metadata contains the information returned from the x-ms-meta header response. + Metadata map[string]*string + + // ObjectReplicationPolicyID contains the information returned from the x-ms-or-policy-id header response. + ObjectReplicationPolicyID *string + + // ObjectReplicationRules contains the information returned from the x-ms-or header response. + ObjectReplicationRules map[string]*string + + // RehydratePriority contains the information returned from the x-ms-rehydrate-priority header response. + RehydratePriority *string + + // RequestID contains the information returned from the x-ms-request-id header response. + RequestID *string + + // TagCount contains the information returned from the x-ms-tag-count header response. 
+ TagCount *int64 + + // Version contains the information returned from the x-ms-version header response. + Version *string + + // VersionID contains the information returned from the x-ms-version-id header response. + VersionID *string + + // Owner contains the information returned from the x-ms-owner header response. + Owner *string + + // Group contains the information returned from the x-ms-group header response. + Group *string + + // Permissions contains the information returned from the x-ms-permissions header response. + Permissions *string +} + +func FormatGetPropertiesResponse(r *blob.GetPropertiesResponse, rawResponse *http.Response) GetPropertiesResponse { + newResp := GetPropertiesResponse{} + newResp.AcceptRanges = r.AcceptRanges + newResp.AccessTier = r.AccessTier + newResp.AccessTierChangeTime = r.AccessTierChangeTime + newResp.AccessTierInferred = r.AccessTierInferred + newResp.ArchiveStatus = r.ArchiveStatus + newResp.CacheControl = r.CacheControl + newResp.ClientRequestID = r.ClientRequestID + newResp.ContentDisposition = r.ContentDisposition + newResp.ContentEncoding = r.ContentEncoding + newResp.ContentLanguage = r.ContentLanguage + newResp.ContentLength = r.ContentLength + newResp.ContentMD5 = r.ContentMD5 + newResp.ContentType = r.ContentType + newResp.CopyCompletionTime = r.CopyCompletionTime + newResp.CopyID = r.CopyID + newResp.CopyProgress = r.CopyProgress + newResp.CopySource = r.CopySource + newResp.CopyStatus = r.CopyStatus + newResp.CopyStatusDescription = r.CopyStatusDescription + newResp.CreationTime = r.CreationTime + newResp.Date = r.Date + newResp.DestinationSnapshot = r.DestinationSnapshot + newResp.ETag = r.ETag + newResp.EncryptionKeySHA256 = r.EncryptionKeySHA256 + newResp.EncryptionScope = r.EncryptionScope + newResp.ExpiresOn = r.ExpiresOn + newResp.ImmutabilityPolicyExpiresOn = r.ImmutabilityPolicyExpiresOn + newResp.ImmutabilityPolicyMode = r.ImmutabilityPolicyMode + newResp.IsCurrentVersion = r.IsCurrentVersion + newResp.IsIncrementalCopy = r.IsIncrementalCopy + newResp.IsSealed = r.IsSealed + newResp.IsServerEncrypted = r.IsServerEncrypted + newResp.LastAccessed = r.LastAccessed + newResp.LastModified = r.LastModified + newResp.LeaseDuration = r.LeaseDuration + newResp.LeaseState = r.LeaseState + newResp.LeaseStatus = r.LeaseStatus + newResp.LegalHold = r.LegalHold + newResp.Metadata = r.Metadata + newResp.ObjectReplicationPolicyID = r.ObjectReplicationPolicyID + newResp.ObjectReplicationRules = r.ObjectReplicationRules + newResp.RehydratePriority = r.RehydratePriority + newResp.RequestID = r.RequestID + newResp.TagCount = r.TagCount + newResp.Version = r.Version + newResp.VersionID = r.VersionID + if val := rawResponse.Header.Get("x-ms-owner"); val != "" { + newResp.Owner = &val + } + if val := rawResponse.Header.Get("x-ms-group"); val != "" { + newResp.Group = &val + } + if val := rawResponse.Header.Get("x-ms-permissions"); val != "" { + newResp.Permissions = &val + } + return newResp +} + +// SetMetadataResponse contains the response fields for the SetMetadata operation. +type SetMetadataResponse = blob.SetMetadataResponse + +// SetHTTPHeadersResponse contains the response from method Client.SetHTTPHeaders. +type SetHTTPHeadersResponse struct { + // ClientRequestID contains the information returned from the x-ms-client-request-id header response. + ClientRequestID *string + + // Date contains the information returned from the Date header response. + Date *time.Time + + // ETag contains the information returned from the ETag header response. 
+ ETag *azcore.ETag + + // LastModified contains the information returned from the Last-Modified header response. + LastModified *time.Time + + // RequestID contains the information returned from the x-ms-request-id header response. + RequestID *string + + // Version contains the information returned from the x-ms-version header response. + Version *string +} + +// removes blob sequence number from response + +func FormatSetHTTPHeadersResponse(r *SetHTTPHeadersResponse, blobResp *blob.SetHTTPHeadersResponse) { + r.ClientRequestID = blobResp.ClientRequestID + r.Date = blobResp.Date + r.ETag = blobResp.ETag + r.LastModified = blobResp.LastModified + r.RequestID = blobResp.RequestID + r.Version = blobResp.Version +} diff --git a/sdk/storage/azdatalake/internal/testcommon/common.go b/sdk/storage/azdatalake/internal/testcommon/common.go index fcb537b98c52..1314309c5ac2 100644 --- a/sdk/storage/azdatalake/internal/testcommon/common.go +++ b/sdk/storage/azdatalake/internal/testcommon/common.go @@ -4,7 +4,7 @@ import ( "errors" "github.com/Azure/azure-sdk-for-go/sdk/azcore" "github.com/Azure/azure-sdk-for-go/sdk/internal/recording" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/bloberror" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/datalakeerror" "github.com/stretchr/testify/require" "os" "strings" @@ -66,7 +66,7 @@ func GetRequiredEnv(name string) (string, error) { } } -func ValidateErrorCode(_require *require.Assertions, err error, code bloberror.Code) { +func ValidateErrorCode(_require *require.Assertions, err error, code datalakeerror.StorageErrorCode) { _require.NotNil(err) var responseErr *azcore.ResponseError errors.As(err, &responseErr) diff --git a/sdk/storage/azdatalake/lease/constants.go b/sdk/storage/azdatalake/lease/constants.go deleted file mode 100644 index 04df4d58e7b1..000000000000 --- a/sdk/storage/azdatalake/lease/constants.go +++ /dev/null @@ -1,51 +0,0 @@ -//go:build go1.18 -// +build go1.18 - -// Copyright (c) Microsoft Corporation. All rights reserved. -// Licensed under the MIT License. See License.txt in the project root for license information. - -package lease - -import "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/lease" - -// StatusType defines values for StatusType -type StatusType = lease.StatusType - -const ( - StatusTypeLocked StatusType = lease.StatusTypeLocked - StatusTypeUnlocked StatusType = lease.StatusTypeUnlocked -) - -// PossibleStatusTypeValues returns the possible values for the StatusType const type. -func PossibleStatusTypeValues() []StatusType { - return lease.PossibleStatusTypeValues() -} - -// DurationType defines values for DurationType -type DurationType = lease.DurationType - -const ( - DurationTypeInfinite DurationType = lease.DurationTypeInfinite - DurationTypeFixed DurationType = lease.DurationTypeFixed -) - -// PossibleDurationTypeValues returns the possible values for the DurationType const type. -func PossibleDurationTypeValues() []DurationType { - return lease.PossibleDurationTypeValues() -} - -// StateType defines values for StateType -type StateType = lease.StateType - -const ( - StateTypeAvailable StateType = lease.StateTypeAvailable - StateTypeLeased StateType = lease.StateTypeLeased - StateTypeExpired StateType = lease.StateTypeExpired - StateTypeBreaking StateType = lease.StateTypeBreaking - StateTypeBroken StateType = lease.StateTypeBroken -) - -// PossibleStateTypeValues returns the possible values for the StateType const type. 
-func PossibleStateTypeValues() []StateType { - return lease.PossibleStateTypeValues() -} diff --git a/sdk/storage/azdatalake/service/client.go b/sdk/storage/azdatalake/service/client.go index 9b7e8721c8db..33e96821f3bd 100644 --- a/sdk/storage/azdatalake/service/client.go +++ b/sdk/storage/azdatalake/service/client.go @@ -11,14 +11,15 @@ import ( "github.com/Azure/azure-sdk-for-go/sdk/azcore" "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" - "github.com/Azure/azure-sdk-for-go/sdk/internal/log" "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/service" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/filesystem" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/base" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/shared" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/sas" + "strings" "time" ) @@ -53,7 +54,7 @@ func NewClient(serviceURL string, cred azcore.TokenCredential, options *ClientOp ClientOptions: options.ClientOptions, } blobSvcClient, _ := service.NewClient(blobServiceURL, cred, &blobServiceClientOpts) - svcClient := base.NewServiceClient(datalakeServiceURL, blobServiceURL, blobSvcClient, azClient, nil, (*base.ClientOptions)(conOptions)) + svcClient := base.NewServiceClient(datalakeServiceURL, blobServiceURL, blobSvcClient, azClient, nil, &cred, (*base.ClientOptions)(conOptions)) return (*Client)(svcClient), nil } @@ -79,7 +80,7 @@ func NewClientWithNoCredential(serviceURL string, options *ClientOptions) (*Clie ClientOptions: options.ClientOptions, } blobSvcClient, _ := service.NewClientWithNoCredential(blobServiceURL, &blobServiceClientOpts) - svcClient := base.NewServiceClient(datalakeServiceURL, blobServiceURL, blobSvcClient, azClient, nil, (*base.ClientOptions)(conOptions)) + svcClient := base.NewServiceClient(datalakeServiceURL, blobServiceURL, blobSvcClient, azClient, nil, nil, (*base.ClientOptions)(conOptions)) return (*Client)(svcClient), nil } @@ -113,7 +114,7 @@ func NewClientWithSharedKeyCredential(serviceURL string, cred *SharedKeyCredenti return nil, err } blobSvcClient, _ := service.NewClientWithSharedKeyCredential(blobServiceURL, blobSharedKey, &blobServiceClientOpts) - svcClient := base.NewServiceClient(datalakeServiceURL, blobServiceURL, blobSvcClient, azClient, cred, (*base.ClientOptions)(conOptions)) + svcClient := base.NewServiceClient(datalakeServiceURL, blobServiceURL, blobSvcClient, azClient, cred, nil, (*base.ClientOptions)(conOptions)) return (*Client)(svcClient), nil } @@ -142,35 +143,29 @@ func (s *Client) getClientOptions() *base.ClientOptions { return base.GetCompositeClientOptions((*base.CompositeClient[generated.ServiceClient, generated.ServiceClient, service.Client])(s)) } -// NewFilesystemClient creates a new share.Client object by concatenating shareName to the end of this Client's URL. -// The new share.Client uses the same request policy pipeline as the Client. +// NewFilesystemClient creates a new filesystem.Client object by concatenating filesystemName to the end of this Client's URL. +// The new filesystem.Client uses the same request policy pipeline as the Client. 
func (s *Client) NewFilesystemClient(filesystemName string) *filesystem.Client { filesystemURL := runtime.JoinPaths(s.generatedServiceClientWithDFS().Endpoint(), filesystemName) - // TODO: remove new azcore.Client creation after the API for shallow copying with new client name is implemented - clOpts := s.getClientOptions() - azClient, err := azcore.NewClient(shared.FilesystemClient, exported.ModuleVersion, *(base.GetPipelineOptions(clOpts)), &(clOpts.ClientOptions)) - if err != nil { - if log.Should(exported.EventError) { - log.Writef(exported.EventError, err.Error()) - } - return nil - } filesystemURL, containerURL := shared.GetURLs(filesystemURL) - return (*filesystem.Client)(base.NewFilesystemClient(filesystemURL, containerURL, s.serviceClient().NewContainerClient(filesystemName), azClient, s.sharedKey(), clOpts)) + return (*filesystem.Client)(base.NewFilesystemClient(filesystemURL, containerURL, s.serviceClient().NewContainerClient(filesystemName), s.generatedServiceClientWithDFS().InternalClient().WithClientName(shared.FilesystemClient), s.sharedKey(), s.identityCredential(), s.getClientOptions())) } -// NewDirectoryClient creates a new share.Client object by concatenating shareName to the end of this Client's URL. -// The new share.Client uses the same request policy pipeline as the Client. -func (s *Client) NewDirectoryClient(directoryName string) *filesystem.Client { - // TODO: implement once dir client is implemented - return nil -} +// GetUserDelegationCredential obtains a UserDelegationKey object using the base ServiceURL object. +// OAuth is required for this call, as well as any role that can delegate access to the storage account. +func (s *Client) GetUserDelegationCredential(ctx context.Context, info KeyInfo, o *GetUserDelegationCredentialOptions) (*UserDelegationCredential, error) { + url, err := azdatalake.ParseURL(s.BlobURL()) + if err != nil { + return nil, err + } -// NewFileClient creates a new share.Client object by concatenating shareName to the end of this Client's URL. -// The new share.Client uses the same request policy pipeline as the Client. -func (s *Client) NewFileClient(fileName string) *filesystem.Client { - // TODO: implement once file client is implemented - return nil + getUserDelegationKeyOptions := o.format() + udk, err := s.generatedServiceClientWithBlob().GetUserDelegationKey(ctx, info, getUserDelegationKeyOptions) + if err != nil { + return nil, err + } + + return exported.NewUserDelegationCredential(strings.Split(url.Host, ".")[0], udk.UserDelegationKey), nil } func (s *Client) generatedServiceClientWithDFS() *generated.ServiceClient { @@ -192,6 +187,10 @@ func (s *Client) sharedKey() *exported.SharedKeyCredential { return base.SharedKeyComposite((*base.CompositeClient[generated.ServiceClient, generated.ServiceClient, service.Client])(s)) } +func (s *Client) identityCredential() *azcore.TokenCredential { + return base.IdentityCredentialComposite((*base.CompositeClient[generated.ServiceClient, generated.ServiceClient, service.Client])(s)) +} + // DFSURL returns the URL endpoint used by the Client object. 
func (s *Client) DFSURL() string { return s.generatedServiceClientWithDFS().Endpoint() @@ -206,6 +205,7 @@ func (s *Client) BlobURL() string { func (s *Client) CreateFilesystem(ctx context.Context, filesystem string, options *CreateFilesystemOptions) (CreateFilesystemResponse, error) { filesystemClient := s.NewFilesystemClient(filesystem) resp, err := filesystemClient.Create(ctx, options) + err = exported.ConvertToDFSError(err) return resp, err } @@ -213,19 +213,24 @@ func (s *Client) CreateFilesystem(ctx context.Context, filesystem string, option func (s *Client) DeleteFilesystem(ctx context.Context, filesystem string, options *DeleteFilesystemOptions) (DeleteFilesystemResponse, error) { filesystemClient := s.NewFilesystemClient(filesystem) resp, err := filesystemClient.Delete(ctx, options) + err = exported.ConvertToDFSError(err) return resp, err } // SetProperties sets properties for a storage account's File service endpoint. (blob3) func (s *Client) SetProperties(ctx context.Context, options *SetPropertiesOptions) (SetPropertiesResponse, error) { opts := options.format() - return s.serviceClient().SetProperties(ctx, opts) + resp, err := s.serviceClient().SetProperties(ctx, opts) + err = exported.ConvertToDFSError(err) + return resp, err } // GetProperties gets properties for a storage account's File service endpoint. (blob3) func (s *Client) GetProperties(ctx context.Context, options *GetPropertiesOptions) (GetPropertiesResponse, error) { opts := options.format() - return s.serviceClient().GetProperties(ctx, opts) + resp, err := s.serviceClient().GetProperties(ctx, opts) + err = exported.ConvertToDFSError(err) + return resp, err } @@ -244,6 +249,7 @@ func (s *Client) NewListFilesystemsPager(options *ListFilesystemsOptions) *runti } newPage := ListFilesystemsResponse{} currPage, err := page.blobPager.NextPage(context.TODO()) + err = exported.ConvertToDFSError(err) if err != nil { return newPage, err } @@ -265,12 +271,7 @@ func (s *Client) NewListFilesystemsPager(options *ListFilesystemsOptions) *runti func (s *Client) GetSASURL(resources sas.AccountResourceTypes, permissions sas.AccountPermissions, expiry time.Time, o *GetSASURLOptions) (string, error) { // format all options to blob service options res, perms, opts := o.format(resources, permissions) - return s.serviceClient().GetSASURL(res, perms, expiry, opts) + resp, err := s.serviceClient().GetSASURL(res, perms, expiry, opts) + err = exported.ConvertToDFSError(err) + return resp, err } - -// TODO: Figure out how we can convert from blob delegation key to one defined in datalake -//// GetUserDelegationCredential obtains a UserDelegationKey object using the base ServiceURL object. -//// OAuth is required for this call, as well as any role that can delegate access to the storage account. 
-//func (s *Client) GetUserDelegationCredential(ctx context.Context, info KeyInfo, o *GetUserDelegationCredentialOptions) (*UserDelegationCredential, error) { -// return s.serviceClient().GetUserDelegationCredential(ctx, info, o) -//} diff --git a/sdk/storage/azdatalake/service/client_test.go b/sdk/storage/azdatalake/service/client_test.go index a9541d4356a4..53fd574b31a7 100644 --- a/sdk/storage/azdatalake/service/client_test.go +++ b/sdk/storage/azdatalake/service/client_test.go @@ -16,7 +16,6 @@ import ( "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/filesystem" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/shared" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/testcommon" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/lease" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/sas" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/service" "github.com/stretchr/testify/require" @@ -465,6 +464,28 @@ func (s *ServiceUnrecordedTestsSuite) TestNoSharedKeyCredError() { } +func (s *ServiceRecordedTestsSuite) TestGetFilesystemClient() { + _require := require.New(s.T()) + testName := s.T().Name() + accountName := os.Getenv("AZURE_STORAGE_ACCOUNT_NAME") + accountKey := os.Getenv("AZURE_STORAGE_ACCOUNT_KEY") + cred, err := azdatalake.NewSharedKeyCredential(accountName, accountKey) + _require.Nil(err) + + serviceClient, err := service.NewClientWithSharedKeyCredential(fmt.Sprintf("https://%s.blob.core.windows.net/", accountName), cred, nil) + _require.Nil(err) + + fsName := testcommon.GenerateFilesystemName(testName + "1") + fsClient := serviceClient.NewFilesystemClient(fsName) + + defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) + _, err = fsClient.Create(context.Background(), nil) + _require.Nil(err) + + _, err = fsClient.GetProperties(context.Background(), nil) + _require.Nil(err) +} + func (s *ServiceRecordedTestsSuite) TestSASFilesystemClient() { _require := require.New(s.T()) testName := s.T().Name() @@ -554,7 +575,7 @@ func (s *ServiceRecordedTestsSuite) TestListFilesystemsBasic() { } fsName := testcommon.GenerateFilesystemName(testName) - fsClient := testcommon.ServiceGetFilesystemClient(fsName, svcClient) + fsClient := svcClient.NewFilesystemClient(fsName) _, err = fsClient.Create(context.Background(), &filesystem.CreateOptions{Metadata: md}) defer func(fsClient *filesystem.Client, ctx context.Context, options *filesystem.DeleteOptions) { _, err := fsClient.Delete(ctx, options) @@ -578,8 +599,8 @@ func (s *ServiceRecordedTestsSuite) TestListFilesystemsBasic() { _require.NotNil(ctnr.Properties) _require.NotNil(ctnr.Properties.LastModified) _require.NotNil(ctnr.Properties.ETag) - _require.Equal(*ctnr.Properties.LeaseStatus, lease.StatusTypeUnlocked) - _require.Equal(*ctnr.Properties.LeaseState, lease.StateTypeAvailable) + _require.Equal(*ctnr.Properties.LeaseStatus, azdatalake.StatusTypeUnlocked) + _require.Equal(*ctnr.Properties.LeaseState, azdatalake.StateTypeAvailable) _require.Nil(ctnr.Properties.LeaseDuration) _require.Nil(ctnr.Properties.PublicAccess) _require.NotNil(ctnr.Metadata) @@ -639,8 +660,8 @@ func (s *ServiceRecordedTestsSuite) TestListFilesystemsBasicUsingConnectionStrin _require.NotNil(ctnr.Properties) _require.NotNil(ctnr.Properties.LastModified) _require.NotNil(ctnr.Properties.ETag) - _require.Equal(*ctnr.Properties.LeaseStatus, lease.StatusTypeUnlocked) - _require.Equal(*ctnr.Properties.LeaseState, lease.StateTypeAvailable) + 
_require.Equal(*ctnr.Properties.LeaseStatus, azdatalake.StatusTypeUnlocked) + _require.Equal(*ctnr.Properties.LeaseState, azdatalake.StateTypeAvailable) _require.Nil(ctnr.Properties.LeaseDuration) _require.Nil(ctnr.Properties.PublicAccess) _require.NotNil(ctnr.Metadata) diff --git a/sdk/storage/azdatalake/service/models.go b/sdk/storage/azdatalake/service/models.go index 7f8fb9c9bb6b..5efbd2b8c927 100644 --- a/sdk/storage/azdatalake/service/models.go +++ b/sdk/storage/azdatalake/service/models.go @@ -7,15 +7,18 @@ package service import ( - "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/lease" "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/service" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/filesystem" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/sas" "time" ) import blobSAS "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/sas" +// KeyInfo contains KeyInfo struct. +type KeyInfo = generated.KeyInfo + type CreateFilesystemOptions = filesystem.CreateOptions type DeleteFilesystemOptions = filesystem.DeleteOptions @@ -44,11 +47,17 @@ type StaticWebsite = service.StaticWebsite // SharedKeyCredential contains an account's name and its primary or secondary key. type SharedKeyCredential = exported.SharedKeyCredential -// GetUserDelegationCredentialOptions contains the optional parameters for the Client.GetUserDelegationCredential method. -type GetUserDelegationCredentialOptions = service.GetUserDelegationCredentialOptions +// PublicAccessType defines values for AccessType - private (default) or file or filesystem. +type PublicAccessType = filesystem.PublicAccessType -// KeyInfo contains KeyInfo struct. -type KeyInfo = service.KeyInfo +// GetUserDelegationCredentialOptions contains optional parameters for Service.GetUserDelegationKey method. +type GetUserDelegationCredentialOptions struct { + // placeholder for future options +} + +func (o *GetUserDelegationCredentialOptions) format() *generated.ServiceClientGetUserDelegationKeyOptions { + return nil +} // UserDelegationCredential contains an account's name and its user delegation key. type UserDelegationCredential = exported.UserDelegationCredential @@ -179,14 +188,3 @@ func (o *GetSASURLOptions) format(resources sas.AccountResourceTypes, permission StartTime: o.StartTime, } } - -// listing response models -// TODO: find another way to import these - -type LeaseDurationType = lease.DurationType - -type LeaseStateType = lease.StateType - -type LeaseStatusType = lease.StatusType - -type PublicAccessType = filesystem.PublicAccessType diff --git a/sdk/storage/azdatalake/service/responses.go b/sdk/storage/azdatalake/service/responses.go index 377532f3488f..6a6cecd23b81 100644 --- a/sdk/storage/azdatalake/service/responses.go +++ b/sdk/storage/azdatalake/service/responses.go @@ -13,6 +13,7 @@ import ( "github.com/Azure/azure-sdk-for-go/sdk/azcore" "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/service" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/filesystem" "time" ) @@ -86,9 +87,9 @@ type FilesystemProperties struct { // Indicates if version level worm is enabled on this container. 
IsImmutableStorageWithVersioningEnabled *bool - LeaseDuration *LeaseDurationType - LeaseState *LeaseStateType - LeaseStatus *LeaseStatusType + LeaseDuration *azdatalake.DurationType + LeaseState *azdatalake.StateType + LeaseStatus *azdatalake.StatusType PreventEncryptionScopeOverride *bool PublicAccess *PublicAccessType RemainingRetentionDays *int32
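
For reviewers, the sketch below strings the pieces of this change together: a token-credential service client, a filesystem client that now reuses the service client's pipeline, error codes rewritten by ConvertToDFSError, and the new GetUserDelegationCredential call. It is a minimal, illustrative sketch and not part of the patch: the "<account>" URL placeholder, the 48-hour key window, and the use of DefaultAzureCredential are assumptions, and the exact surface may still shift while the module is under development.

// usage_sketch.go (illustrative only; not part of this change)
package main

import (
	"context"
	"errors"
	"fmt"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/service"
)

func main() {
	ctx := context.Background()

	// The token credential now also flows into the composite clients
	// (see the identityCred field added to base.CompositeClient).
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		panic(err)
	}
	svc, err := service.NewClient("https://<account>.dfs.core.windows.net/", cred, nil)
	if err != nil {
		panic(err)
	}

	// Service-level helpers route errors through exported.ConvertToDFSError, so a
	// failure here should surface filesystem/path error codes rather than the
	// blob/container codes of the underlying azblob call.
	if _, err := svc.CreateFilesystem(ctx, "myfilesystem", nil); err != nil {
		var respErr *azcore.ResponseError
		if errors.As(err, &respErr) {
			fmt.Println("service error code:", respErr.ErrorCode)
		}
		return
	}

	// Filesystem clients are now built on the service client's existing pipeline
	// instead of constructing a fresh azcore.Client per child client.
	fsClient := svc.NewFilesystemClient("myfilesystem")
	if _, err := fsClient.GetProperties(ctx, nil); err != nil {
		panic(err)
	}

	// GetUserDelegationCredential requires OAuth; KeyInfo carries ISO 8601 UTC times.
	// The 48-hour window below is an arbitrary example value.
	const iso8601 = "2006-01-02T15:04:05Z"
	start := time.Now().UTC().Format(iso8601)
	expiry := time.Now().UTC().Add(48 * time.Hour).Format(iso8601)
	udc, err := svc.GetUserDelegationCredential(ctx, service.KeyInfo{Start: &start, Expiry: &expiry}, nil)
	if err != nil {
		panic(err)
	}
	// The credential would typically be used afterwards to sign a user delegation SAS.
	_ = udc
}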