Conversation
Force-pushed from 18592c1 to 989cfe0.
    {
        if (const auto it = cache.find(key); it != cache.end())
        {
            if (IsStaleFunction()(it->first))
This case is interesting: we find an exact match, but it has expired. Should we try to find a prefix match, or simply update the entry?
Well, there can be a more up-to-date prefix entry, so why not try to reuse it?
The only downside is that this exact entry would then cease to exist. It would never be cached again, and looking it up would become a linear prefix search forever.
Actually, not forever: if the more up-to-date prefix entry gets evicted and this query is performed again, the exact entry would reappear.
But I think you are right.
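The policy being agreed on above can be sketched as follows. This is a hypothetical illustration, not the actual ObjectStorageListObjectsCache code: the `PrefixCache`/`Entry` names, the TTL check, and the longest-prefix fallback are assumptions based on the discussion.

```cpp
#include <cassert>
#include <chrono>
#include <map>
#include <optional>
#include <string>
#include <vector>

// A cached ListObjects result together with its insertion time.
struct Entry
{
    std::chrono::steady_clock::time_point inserted_at;
    std::vector<std::string> objects;
};

class PrefixCache
{
public:
    explicit PrefixCache(std::chrono::seconds ttl_) : ttl(ttl_) {}

    void put(const std::string & key, std::vector<std::string> objects)
    {
        cache[key] = Entry{std::chrono::steady_clock::now(), std::move(objects)};
    }

    // Exact match first; if the exact entry is missing or stale, fall back
    // to the longest fresh prefix entry (the caller filters its objects
    // down to the narrower prefix) instead of failing outright.
    std::optional<std::vector<std::string>> get(const std::string & key) const
    {
        if (auto it = cache.find(key); it != cache.end() && !isStale(it->second))
            return it->second.objects;

        const Entry * best = nullptr;
        std::size_t best_len = 0;
        for (const auto & [prefix, entry] : cache)
        {
            if (!isStale(entry) && key.compare(0, prefix.size(), prefix) == 0
                && prefix.size() >= best_len)
            {
                best = &entry;
                best_len = prefix.size();
            }
        }
        if (best)
            return best->objects;
        return std::nullopt; // cache miss: caller performs a real ListObjects call
    }

private:
    bool isStale(const Entry & e) const
    {
        return std::chrono::steady_clock::now() - e.inserted_at > ttl;
    }

    std::chrono::seconds ttl;
    std::map<std::string, Entry> cache;
};
```

Note the trade-off discussed above: as long as a fresh prefix entry keeps answering, the exact key is never re-inserted, so those lookups keep paying the linear scan until the prefix entry is evicted.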
    {
        throw Exception(
            ErrorCodes::BAD_ARGUMENTS,
            "Using glob iterator with path without globs is not allowed (used path: {})",
Shouldn't it be LOGICAL_ERROR?
This looks like a code path we cannot reach under normal conditions (the user does not manually select which iterator to use).
I agree, it should probably be LOGICAL_ERROR. But:
This is mostly a copy-paste from the existing GlobIterator.
I might refactor this to avoid duplication; for now, this is just a draft implementation.
Even if I refactor it, I would opt for keeping parity with the existing code and upstream. This will make reviews and merges with upstream easier.
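The distinction the reviewer is drawing can be illustrated with a toy model. The real ClickHouse `Exception` class and `ErrorCodes` namespace differ from this sketch; the point is only the policy: BAD_ARGUMENTS marks invalid user input, while LOGICAL_ERROR marks an internal invariant violation, i.e. a bug, since the user cannot steer execution into this branch.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Toy stand-ins for ClickHouse's error codes and Exception (assumption).
enum class ErrorCode { BAD_ARGUMENTS, LOGICAL_ERROR };

struct Exception : std::runtime_error
{
    ErrorCode code;
    Exception(ErrorCode code_, const std::string & msg) : std::runtime_error(msg), code(code_) {}
};

// The engine, not the user, decides when a glob iterator is constructed,
// so reaching this branch with a glob-free path indicates a bug:
// LOGICAL_ERROR rather than BAD_ARGUMENTS.
void createGlobIterator(const std::string & path)
{
    if (path.find_first_of("*?{") == std::string::npos)
        throw Exception(
            ErrorCode::LOGICAL_ERROR,
            "Using glob iterator with path without globs is not allowed (used path: " + path + ")");
}
```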
src/Core/Settings.cpp
    - Positive integer.
    - 0 — All hosts in cluster.
    )", EXPERIMENTAL) \
    DECLARE(Bool, use_object_storage_list_objects_cache, true, R"(
Please add it to src/Core/SettingsChangesHistory.cpp.
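For reference, an entry of roughly this shape would be added. This is a sketch: the target version string and the exact helper signature depend on the branch being merged into, so treat both as assumptions.

```cpp
// Hypothetical entry in src/Core/SettingsChangesHistory.cpp; "25.x" is a placeholder.
addSettingsChanges(settings_changes_history, "25.x",
{
    {"use_object_storage_list_objects_cache", false, true,
     "New setting to cache ListObjects calls on object storage."},
});
```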
        cache.setMaxCount(count);
    }

    void ObjectStorageListObjectsCache::setTTL(std::size_t ttl_)
Is it in seconds/milliseconds/minutes/hours?
In seconds; I will rename the argument to make that explicit.
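There are two common ways to resolve this ambiguity, sketched below on a minimal stand-in class (not the real ObjectStorageListObjectsCache): rename the parameter to carry the unit, or encode the unit in the type with `std::chrono` so any duration converts correctly.

```cpp
#include <cassert>
#include <chrono>
#include <cstddef>

// Minimal sketch; names are assumptions.
class ListObjectsCacheSketch
{
public:
    // Option 1: the unit is in the parameter name.
    void setTTL(std::size_t ttl_seconds_) { ttl = std::chrono::seconds(ttl_seconds_); }

    // Option 2: the unit is in the type; callers can pass seconds, minutes, etc.
    void setTTL(std::chrono::seconds ttl_) { ttl = ttl_; }

    std::chrono::seconds getTTL() const { return ttl; }

private:
    std::chrono::seconds ttl{0};
};
```

The `std::chrono` overload is the more robust choice: passing `std::chrono::minutes(1)` converts to 60 seconds at compile time, so the unit can never be misread.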
            throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "The server was compiled without the support for Parquet");
    #endif
        }
        case Type::DROP_OBJECT_STORAGE_LIST_OBJECTS_CACHE:
Does caching work only on Parquet files, or generally on any S3 ListObjects request?
Ah, a copy-and-paste issue. It should be any :D
The only functional difference between this version and the last reviewed one is that the cache key now incorporates the (normalized) storage endpoint. The reason is that we want to avoid collisions between different storage providers in the remote scenario, where a bucket with the same name containing the same directories exists in multiple object storage providers (i.e. AWS S3, GCS, etc.).
Input: s3://aws-public-blockchainn/v1.0/btc/transactions/**.parquet
Input: http://aws-public-blockchainn.s3.amazonaws.com/v1.0/btc/transactions/date=2025-01*/*.parquet
Input: https://storage.googleapiss.com/clickhouse-public-datasets/nyc-taxi/trips_{0..1}.gz
Input: Return: http://azurite1:10000/devstoreaccount1/
Input: Return: http://azurite1:10000/devstoreaccount1
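The collision-avoidance idea, including the trailing-slash normalization visible in the azurite examples above, can be sketched like this. The function names and key layout are hypothetical, not the PR's actual code.

```cpp
#include <cassert>
#include <string>

// Treat "http://azurite1:10000/devstoreaccount1/" and the same URL without
// the trailing slash as one endpoint.
std::string normalizeEndpoint(std::string endpoint)
{
    while (!endpoint.empty() && endpoint.back() == '/')
        endpoint.pop_back();
    return endpoint;
}

// Prepend the normalized endpoint so that identically named buckets on
// different providers (AWS S3, GCS, Azure, ...) never share cache entries.
std::string makeCacheKey(const std::string & endpoint, const std::string & bucket, const std::string & prefix)
{
    return normalizeEndpoint(endpoint) + "/" + bucket + "/" + prefix;
}
```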
The ones without an example I have not had time to test, sorry. I need to work on a presentation for tomorrow.
… gcs, aws) and azure only. Leaving out HDFS, Local and Web
Antalya: Cache the list objects operation on object storage using a TTL + prefix matching cache implementation
Changelog category (leave one):
Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):
Cache for ListObjects calls.
Documentation entry for user-facing changes