
Rebase #1

Merged
110 commits merged on Mar 17, 2020

Changes from 1 commit (of 110 commits)
90cad19
Introduce Builder for HiveQueryRunner
kokosing Mar 5, 2020
45137ba
Cleanup in HiveQueryRunner
kokosing Mar 5, 2020
6509248
Allow to set custom metastore in HiveQueryRunner
kokosing Mar 5, 2020
9f3d54d
Add check for Hive ACID table version
Mar 2, 2020
6ab2a35
Add test for CallTask
Feb 24, 2020
f46d08f
Add access control check for Procedures
Feb 24, 2020
2cc682a
Fix method name
findepi Mar 6, 2020
0f147a7
Add logging in CassandraSplitManager
findepi Mar 6, 2020
8a58219
Compile also with Java 13 on CI
findepi Mar 6, 2020
78a99d9
Add support for VARBINARY in H2 query runner
findepi Feb 17, 2020
4ea3d95
Remove external location constraint from file metastore
findepi Feb 17, 2020
8c3fa1a
Add column pruning test
findepi Feb 17, 2020
adc61db
Declare field as a List
findepi Feb 17, 2020
f0800cd
Simplify code
findepi Feb 17, 2020
9005550
Verify GroupField integrity
findepi Feb 17, 2020
773f77e
Implement Field#toString
findepi Feb 17, 2020
4374841
Cache AWS Current Region Result
pettyjamesm Nov 13, 2019
2f4f70d
Lowercase names passed to *RoutineName
kokosing Mar 6, 2020
e8e0d5e
Update to Airlift 0.193
electrum Mar 4, 2020
8a63f26
Increase default HTTP request/response header size
electrum Feb 3, 2020
d1f86a1
Allow specifying rounding mode when encoding decimals
electrum Mar 2, 2020
34312cd
Make randomTableSuffix utility method public
electrum Mar 2, 2020
2206f32
Support Intelligent-Tiering S3 storage class
arpit1997 Mar 7, 2020
c97858a
Introduce TransactionalMetadataFactory
kokosing Feb 25, 2020
5b0818a
Add support for ALTER SCHEMA SET AUTHORIZATION
lhofhansl Jan 29, 2020
20feda1
Filter out Hive information_schema and sys
findepi Sep 13, 2019
f01b3b4
Run storage_formats for kerberized HDFS without impersonation
findepi Mar 9, 2020
2349eb0
Consider user groups in resource groups selections
MiguelWeezardo Nov 26, 2019
aa65913
Add test for InterfaceTestUtils
findepi Mar 9, 2020
0cd46d8
Do not expect static methods to be overridden
findepi Mar 7, 2020
6834b2f
Fix overly strict inheritance check
findepi Mar 7, 2020
a0c63ab
Allow testing abstract class overrides
findepi Mar 9, 2020
65d4633
bump aws-sdk to 1.11.697
tooptoop4 Mar 5, 2020
3698fda
Restore ConnectorIdentity constructor parameter checks
MiguelWeezardo Mar 10, 2020
83aaec1
Remove unused transaction handle parameter
kokosing Mar 9, 2020
bbeac7f
Test Hive connector without SOCKS
findepi Mar 10, 2020
4f22b0e
Add default value of Glue GetPartitions MaxResults
Mar 11, 2020
a6c8485
Move readConfiguration to ConfigurationUtils
kokosing Mar 10, 2020
e385f04
Pass ConnectorSession to getSplits in JDBC connector
wendigo Mar 10, 2020
39a4abe
Increase amount of available memory for tests
findepi Mar 10, 2020
93cb89a
Fix package name
findepi Mar 10, 2020
e3d2bf5
Code cleanup
findepi Mar 10, 2020
915627e
Inline access to guarded field
findepi Mar 5, 2020
1dca7e9
Introduce HiveMetastoreDecorator
kokosing Mar 11, 2020
18e9943
Reuse assertEventually
findepi Mar 11, 2020
70e88f2
Assert eventually in TestTransactionManager.testExpiration
findepi Mar 11, 2020
40b0a06
Allow to set custom Hive module in tests
kokosing Mar 11, 2020
52a53a8
Flush remaining output pages before requesting next input page
sopel39 Mar 10, 2020
63bcde1
Don't do partition filter check on analyze call
Mar 11, 2020
5070240
Add --timezone CLI Argument
Mar 2, 2020
91d96a5
Remove extra variable prefixes and suffixes
kokosing Mar 11, 2020
f8b331e
Precompute probe row size estimate once
sopel39 Mar 10, 2020
c52f59f
Limit memory used by product tests' tests
findepi Mar 12, 2020
8f84694
Allocate as little memory as necessary for product tests framework
findepi Mar 12, 2020
58758f7
Remove unused field
findepi Mar 12, 2020
34d8f1d
Cleanup JDBC shading rules
electrum Mar 7, 2020
04952e6
Remove unused validation messages from JDBC shading
electrum Mar 7, 2020
f3744f2
Improve StreamingAggregationOperator invariant comment
sopel39 Mar 12, 2020
9c00a22
Use compute() to merge the operator stats into the operator summary
Mar 12, 2020
b353e10
use Map merge()
Mar 13, 2020
4fd44f8
Reuse dataSizeProperty, durationProperty
findepi Mar 12, 2020
b6ebba8
Remove redundant comma in Kudu document
ebyhr Mar 12, 2020
09edc58
Fix UI redirect with proxy
dain Feb 29, 2020
5ff97d2
Document filter for aggregate functions
mosabua Feb 21, 2020
557bac8
Add JMX monitoring documentation
mosabua Feb 18, 2020
36409c8
Add note about tmp folder config for jvm
mosabua Feb 14, 2020
819fd8b
Doc hidden columns and properties table
mosabua Feb 13, 2020
7f24aec
Enable Predicate pushdown for non Timestamp/Time column
Praveen2112 Feb 7, 2020
1660f99
Refactor DomainConverter
Praveen2112 Mar 1, 2020
298bf87
Push compacted TupleDomain for Iceberg filter
Praveen2112 Mar 1, 2020
de2e0ca
Fix predicate pushdown for columns of type Timestamp/Time
Praveen2112 Mar 1, 2020
fec8ce4
Doc workaround for higher sphinx version usage
mosabua Feb 12, 2020
14fc860
Doc for writer scaling
mosabua Feb 14, 2020
37d3068
Hive 3 support documentation
findepi Oct 11, 2019
5bad95f
Do not create empty schemas in HiveQueryRunner
kokosing Mar 12, 2020
4dd1d6e
Reduce maven HTTP timeout
findepi Mar 15, 2020
21e18f0
Increase retry time limit for maven
findepi Mar 15, 2020
60e0c9d
Start MongoQueryRunner on 8080
findepi Mar 14, 2020
a225545
Fix types in test
findepi Mar 14, 2020
e7b3186
Fix Mongo type mappings
findepi Mar 14, 2020
ce11bec
Disable incorrect pushdown in Mongo connector
findepi Mar 14, 2020
92b118f
Add generic data type & pushdown test
findepi Mar 14, 2020
6a47833
Add 331 release notes
martint Feb 26, 2020
1795b11
Provide Type to getColumnMasks
martint Mar 16, 2020
5ee56cf
[maven-release-plugin] prepare release 331
martint Mar 16, 2020
3638dfe
[maven-release-plugin] prepare for next development iteration
martint Mar 16, 2020
b11397f
Support column reordering in cross join
kasiafi Jan 30, 2020
7dd3d18
Use catalog name in S3 statistics name
MiguelWeezardo Mar 16, 2020
42b234e
Rename testViewAccessControl to emphasize focus on columns
kokosing Mar 9, 2020
f618b3a
Match error message with access control check method
kokosing Mar 9, 2020
05cd9d1
Use static import
kokosing Mar 9, 2020
44a3689
Improve robustness of InitializeSystemAccessControl
kokosing Mar 9, 2020
6855c46
Move createExpression to ExpressionTestUtils
kokosing Mar 9, 2020
a0f113c
Wrap expression test utilities with transaction
kokosing Mar 9, 2020
0039dd8
Add execute function access control
kokosing Mar 9, 2020
cb9673a
Make assertion to be connector agnostic
kokosing Mar 17, 2020
e70e47d
Remove unused parserOptions parameter
kokosing Mar 16, 2020
33853c6
Fallback to delegate for masks and filters
kokosing Mar 16, 2020
c5012cc
Introduce builder for TestingPrestoServer
kokosing Mar 16, 2020
6c51143
Allow to set custom system access control for tests
kokosing Mar 16, 2020
a4d3c21
Use Optional for nullable variables
kokosing Mar 16, 2020
0b09aef
Allow to pass additional modules to DistributedQueryRunner
kokosing Mar 16, 2020
d06e9dd
Fix formatting
kokosing Mar 16, 2020
7382569
Quiet fast install mvn
kokosing Mar 17, 2020
c15d00e
Refactor memory connector
Lewuathe Mar 17, 2020
c37226b
Add default access-control.properties file
lukasz-walkiewicz Mar 17, 2020
27092e3
Allow public access to ParquetPageSourceFactory#createParquetPageSource
alexjo2144 Mar 17, 2020
1fadfa7
Get timestamp from ObjectId
findepi Mar 17, 2020
3895d6f
Simplify function declaration
findepi Mar 17, 2020
646ed33
Extract hdp3 tests to separate suite
kokosing Mar 17, 2020
Introduce TransactionalMetadataFactory
kokosing committed Mar 9, 2020
commit c97858a55d3f56a69fe2cf9f7062bdfa60b211a4
@@ -35,7 +35,6 @@
 import java.util.List;
 import java.util.Optional;
 import java.util.Set;
-import java.util.function.Supplier;

 import static com.google.common.base.Preconditions.checkArgument;
 import static io.prestosql.spi.transaction.IsolationLevel.READ_UNCOMMITTED;
@@ -46,7 +45,7 @@ public class HiveConnector
         implements Connector
 {
     private final LifeCycleManager lifeCycleManager;
-    private final Supplier<TransactionalMetadata> metadataFactory;
+    private final TransactionalMetadataFactory metadataFactory;
     private final ConnectorSplitManager splitManager;
     private final ConnectorPageSourceProvider pageSourceProvider;
     private final ConnectorPageSinkProvider pageSinkProvider;
@@ -65,7 +64,7 @@ public class HiveConnector

     public HiveConnector(
             LifeCycleManager lifeCycleManager,
-            Supplier<TransactionalMetadata> metadataFactory,
+            TransactionalMetadataFactory metadataFactory,
             HiveTransactionManager transactionManager,
             ConnectorSplitManager splitManager,
             ConnectorPageSourceProvider pageSourceProvider,
@@ -189,7 +188,7 @@ public ConnectorTransactionHandle beginTransaction(IsolationLevel isolationLevel
         checkConnectorSupports(READ_UNCOMMITTED, isolationLevel);
         ConnectorTransactionHandle transaction = new HiveTransactionHandle();
         try (ThreadContextClassLoader ignored = new ThreadContextClassLoader(classLoader)) {
-            transactionManager.put(transaction, metadataFactory.get());
+            transactionManager.put(transaction, metadataFactory.create(transaction));
         }
         return transaction;
     }
@@ -21,6 +21,7 @@
 import io.prestosql.plugin.hive.metastore.SemiTransactionalHiveMetastore;
 import io.prestosql.plugin.hive.security.AccessControlMetadataFactory;
 import io.prestosql.plugin.hive.statistics.MetastoreHiveStatisticsProvider;
+import io.prestosql.spi.connector.ConnectorTransactionHandle;
 import io.prestosql.spi.type.TypeManager;
 import org.joda.time.DateTimeZone;

@@ -29,13 +30,12 @@
 import java.util.Optional;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.ScheduledExecutorService;
-import java.util.function.Supplier;

 import static io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.memoizeMetastore;
 import static java.util.Objects.requireNonNull;

 public class HiveMetadataFactory
-        implements Supplier<TransactionalMetadata>
+        implements TransactionalMetadataFactory
 {
     private static final Logger log = Logger.get(HiveMetadataFactory.class);

@@ -164,7 +164,7 @@ public HiveMetadataFactory(
     }

     @Override
-    public HiveMetadata get()
+    public TransactionalMetadata create(ConnectorTransactionHandle transactionHandle)
     {
         SemiTransactionalHiveMetastore metastore = new SemiTransactionalHiveMetastore(
                 hdfsEnvironment,
@@ -192,6 +192,6 @@ public HiveMetadata get()
                 typeTranslator,
                 prestoVersion,
                 new MetastoreHiveStatisticsProvider(metastore),
-                accessControlMetadataFactory.create(metastore));
+                accessControlMetadataFactory.create(transactionHandle, metastore));
     }
 }
@@ -19,7 +19,6 @@
 import com.google.inject.Module;
 import com.google.inject.Provides;
 import com.google.inject.Scopes;
-import com.google.inject.TypeLiteral;
 import com.google.inject.multibindings.Multibinder;
 import io.airlift.event.client.EventClient;
 import io.prestosql.plugin.hive.metastore.SemiTransactionalHiveMetastore;
@@ -48,7 +47,6 @@
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.ScheduledExecutorService;
 import java.util.function.Function;
-import java.util.function.Supplier;

 import static com.google.inject.multibindings.Multibinder.newSetBinder;
 import static io.airlift.concurrent.Threads.daemonThreadsNamed;
@@ -90,7 +88,7 @@ public void configure(Binder binder)
         binder.bind(HivePartitionManager.class).in(Scopes.SINGLETON);
         binder.bind(LocationService.class).to(HiveLocationService.class).in(Scopes.SINGLETON);
         binder.bind(HiveMetadataFactory.class).in(Scopes.SINGLETON);
-        binder.bind(new TypeLiteral<Supplier<TransactionalMetadata>>() {}).to(HiveMetadataFactory.class).in(Scopes.SINGLETON);
+        binder.bind(TransactionalMetadataFactory.class).to(HiveMetadataFactory.class).in(Scopes.SINGLETON);
         binder.bind(HiveTransactionManager.class).in(Scopes.SINGLETON);
         binder.bind(ConnectorSplitManager.class).to(HiveSplitManager.class).in(Scopes.SINGLETON);
         newExporter(binder).export(ConnectorSplitManager.class).as(generator -> generator.generatedNameOf(HiveSplitManager.class));
@@ -0,0 +1,21 @@
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package io.prestosql.plugin.hive;

import io.prestosql.spi.connector.ConnectorTransactionHandle;

public interface TransactionalMetadataFactory
{
    TransactionalMetadata create(ConnectorTransactionHandle transactionHandle);
}
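The commit's core change is replacing `Supplier<TransactionalMetadata>` with the dedicated factory interface above, so the transaction handle can be threaded through to the created metadata (and on to access control). A minimal, self-contained sketch of that same refactoring pattern, using stand-in types (`Handle`, `Metadata`, `MetadataFactory`) invented for illustration rather than Presto's real classes:

```java
import java.util.HashMap;
import java.util.Map;

public class Main
{
    // Stand-in for ConnectorTransactionHandle
    record Handle(long id) {}

    // Stand-in for TransactionalMetadata; it now carries its transaction handle
    record Metadata(Handle handle) {}

    // Before the change this was just Supplier<Metadata>: create() took no
    // arguments, so the metadata could never see which transaction it served.
    interface MetadataFactory
    {
        Metadata create(Handle handle);
    }

    public static void main(String[] args)
    {
        // Stand-in for HiveTransactionManager's handle -> metadata map
        Map<Handle, Metadata> transactionManager = new HashMap<>();
        MetadataFactory factory = Metadata::new;

        // Mirrors beginTransaction(): make a handle, then hand it to the factory
        Handle transaction = new Handle(42);
        transactionManager.put(transaction, factory.create(transaction));

        Metadata metadata = transactionManager.get(transaction);
        if (metadata.handle().id() != 42) {
            throw new AssertionError("metadata should carry its transaction handle");
        }
        System.out.println("ok");
    }
}
```

The design point is that a named single-method interface, unlike `java.util.function.Supplier`, lets the signature grow a parameter without breaking every binding site at once.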
@@ -21,11 +21,13 @@
 import io.prestosql.plugin.hive.HiveInsertTableHandle;
 import io.prestosql.plugin.hive.HiveMetastoreClosure;
 import io.prestosql.plugin.hive.HiveTableHandle;
+import io.prestosql.plugin.hive.HiveTransactionHandle;
 import io.prestosql.plugin.hive.LocationService;
 import io.prestosql.plugin.hive.LocationService.WriteInfo;
 import io.prestosql.plugin.hive.PartitionUpdate;
 import io.prestosql.plugin.hive.PartitionUpdate.UpdateMode;
 import io.prestosql.plugin.hive.TransactionalMetadata;
+import io.prestosql.plugin.hive.TransactionalMetadataFactory;
 import io.prestosql.plugin.hive.authentication.HiveIdentity;
 import io.prestosql.plugin.hive.metastore.HiveMetastore;
 import io.prestosql.spi.PrestoException;
@@ -44,7 +46,6 @@
 import java.util.List;
 import java.util.Objects;
 import java.util.Optional;
-import java.util.function.Supplier;

 import static com.google.common.collect.ImmutableList.toImmutableList;
 import static io.prestosql.spi.StandardErrorCode.ALREADY_EXISTS;
@@ -66,13 +67,13 @@ public class CreateEmptyPartitionProcedure
             List.class,
             List.class);

-    private final Supplier<TransactionalMetadata> hiveMetadataFactory;
+    private final TransactionalMetadataFactory hiveMetadataFactory;
     private final HiveMetastoreClosure metastore;
     private final LocationService locationService;
     private final JsonCodec<PartitionUpdate> partitionUpdateJsonCodec;

     @Inject
-    public CreateEmptyPartitionProcedure(Supplier<TransactionalMetadata> hiveMetadataFactory, HiveMetastore metastore, LocationService locationService, JsonCodec<PartitionUpdate> partitionUpdateCodec)
+    public CreateEmptyPartitionProcedure(TransactionalMetadataFactory hiveMetadataFactory, HiveMetastore metastore, LocationService locationService, JsonCodec<PartitionUpdate> partitionUpdateCodec)
     {
         this.hiveMetadataFactory = requireNonNull(hiveMetadataFactory, "hiveMetadataFactory is null");
         this.metastore = new HiveMetastoreClosure(requireNonNull(metastore, "metastore is null"));
@@ -103,7 +104,7 @@ public void createEmptyPartition(ConnectorSession session, String schema, String

     private void doCreateEmptyPartition(ConnectorSession session, String schemaName, String tableName, List<String> partitionColumnNames, List<String> partitionValues)
     {
-        TransactionalMetadata hiveMetadata = hiveMetadataFactory.get();
+        TransactionalMetadata hiveMetadata = hiveMetadataFactory.create(new HiveTransactionHandle());
         HiveTableHandle tableHandle = (HiveTableHandle) hiveMetadata.getTableHandle(session, new SchemaTableName(schemaName, tableName));
         if (tableHandle == null) {
             throw new PrestoException(INVALID_PROCEDURE_ARGUMENT, format("Table %s does not exist", new SchemaTableName(schemaName, tableName)));
@@ -17,8 +17,10 @@
 import io.prestosql.plugin.hive.HiveColumnHandle;
 import io.prestosql.plugin.hive.HiveMetastoreClosure;
 import io.prestosql.plugin.hive.HiveTableHandle;
+import io.prestosql.plugin.hive.HiveTransactionHandle;
 import io.prestosql.plugin.hive.PartitionStatistics;
 import io.prestosql.plugin.hive.TransactionalMetadata;
+import io.prestosql.plugin.hive.TransactionalMetadataFactory;
 import io.prestosql.plugin.hive.authentication.HiveIdentity;
 import io.prestosql.plugin.hive.metastore.HiveMetastore;
 import io.prestosql.spi.PrestoException;
@@ -36,7 +38,6 @@
 import java.lang.invoke.MethodHandle;
 import java.util.List;
 import java.util.Map;
-import java.util.function.Supplier;

 import static com.google.common.collect.ImmutableList.toImmutableList;
 import static io.prestosql.spi.StandardErrorCode.INVALID_PROCEDURE_ARGUMENT;
@@ -62,11 +63,11 @@ public class DropStatsProcedure
             String.class,
             List.class);

-    private final Supplier<TransactionalMetadata> hiveMetadataFactory;
+    private final TransactionalMetadataFactory hiveMetadataFactory;
     private final HiveMetastoreClosure metastore;

     @Inject
-    public DropStatsProcedure(Supplier<TransactionalMetadata> hiveMetadataFactory, HiveMetastore metastore)
+    public DropStatsProcedure(TransactionalMetadataFactory hiveMetadataFactory, HiveMetastore metastore)
     {
         this.hiveMetadataFactory = requireNonNull(hiveMetadataFactory, "hiveMetadataFactory is null");
         this.metastore = new HiveMetastoreClosure(requireNonNull(metastore, "metastore is null"));
@@ -94,7 +95,7 @@ public void dropStats(ConnectorSession session, String schema, String table, Lis

     private void doDropStats(ConnectorSession session, String schema, String table, List<?> partitionValues)
     {
-        TransactionalMetadata hiveMetadata = hiveMetadataFactory.get();
+        TransactionalMetadata hiveMetadata = hiveMetadataFactory.create(new HiveTransactionHandle());
         HiveTableHandle handle = (HiveTableHandle) hiveMetadata.getTableHandle(session, new SchemaTableName(schema, table));
         if (handle == null) {
             throw new PrestoException(INVALID_PROCEDURE_ARGUMENT, format("Table %s does not exist", new SchemaTableName(schema, table)));
@@ -20,8 +20,9 @@
 import io.prestosql.plugin.hive.HiveConfig;
 import io.prestosql.plugin.hive.HiveMetadata;
 import io.prestosql.plugin.hive.HiveMetastoreClosure;
+import io.prestosql.plugin.hive.HiveTransactionHandle;
 import io.prestosql.plugin.hive.PartitionStatistics;
-import io.prestosql.plugin.hive.TransactionalMetadata;
+import io.prestosql.plugin.hive.TransactionalMetadataFactory;
 import io.prestosql.plugin.hive.authentication.HiveIdentity;
 import io.prestosql.plugin.hive.metastore.HiveMetastore;
 import io.prestosql.plugin.hive.metastore.Partition;
@@ -44,7 +45,6 @@
 import java.lang.invoke.MethodHandle;
 import java.util.List;
 import java.util.Optional;
-import java.util.function.Supplier;

 import static io.prestosql.plugin.hive.HiveMetadata.PRESTO_QUERY_ID_NAME;
 import static io.prestosql.plugin.hive.procedure.Procedures.checkIsPartitionedTable;
@@ -71,12 +71,12 @@ public class RegisterPartitionProcedure
             String.class);

     private final boolean allowRegisterPartition;
-    private final Supplier<TransactionalMetadata> hiveMetadataFactory;
+    private final TransactionalMetadataFactory hiveMetadataFactory;
     private final HdfsEnvironment hdfsEnvironment;
     private final HiveMetastoreClosure metastore;

     @Inject
-    public RegisterPartitionProcedure(HiveConfig hiveConfig, Supplier<TransactionalMetadata> hiveMetadataFactory, HiveMetastore metastore, HdfsEnvironment hdfsEnvironment)
+    public RegisterPartitionProcedure(HiveConfig hiveConfig, TransactionalMetadataFactory hiveMetadataFactory, HiveMetastore metastore, HdfsEnvironment hdfsEnvironment)
     {
         this.allowRegisterPartition = requireNonNull(hiveConfig, "hiveConfig is null").isAllowRegisterPartition();
         this.hiveMetadataFactory = requireNonNull(hiveMetadataFactory, "hiveMetadataFactory is null");
@@ -134,7 +134,7 @@ private void doRegisterPartition(ConnectorSession session, String schemaName, St
             throw new PrestoException(INVALID_PROCEDURE_ARGUMENT, "Partition location does not exist: " + partitionLocation);
         }

-        SemiTransactionalHiveMetastore metastore = ((HiveMetadata) hiveMetadataFactory.get()).getMetastore();
+        SemiTransactionalHiveMetastore metastore = ((HiveMetadata) hiveMetadataFactory.create(new HiveTransactionHandle())).getMetastore();

         metastore.addPartition(
                 session,
@@ -18,8 +18,9 @@
 import com.google.common.collect.Sets;
 import io.prestosql.plugin.hive.HdfsEnvironment;
 import io.prestosql.plugin.hive.HiveMetadata;
+import io.prestosql.plugin.hive.HiveTransactionHandle;
 import io.prestosql.plugin.hive.PartitionStatistics;
-import io.prestosql.plugin.hive.TransactionalMetadata;
+import io.prestosql.plugin.hive.TransactionalMetadataFactory;
 import io.prestosql.plugin.hive.authentication.HiveIdentity;
 import io.prestosql.plugin.hive.metastore.Column;
 import io.prestosql.plugin.hive.metastore.Partition;
@@ -44,7 +45,6 @@
 import java.util.HashSet;
 import java.util.List;
 import java.util.Set;
-import java.util.function.Supplier;
 import java.util.stream.Stream;

 import static com.google.common.collect.ImmutableList.toImmutableList;
@@ -74,12 +74,12 @@ public enum SyncMode
             String.class,
             String.class);

-    private final Supplier<TransactionalMetadata> hiveMetadataFactory;
+    private final TransactionalMetadataFactory hiveMetadataFactory;
     private final HdfsEnvironment hdfsEnvironment;

     @Inject
     public SyncPartitionMetadataProcedure(
-            Supplier<TransactionalMetadata> hiveMetadataFactory,
+            TransactionalMetadataFactory hiveMetadataFactory,
             HdfsEnvironment hdfsEnvironment)
     {
         this.hiveMetadataFactory = requireNonNull(hiveMetadataFactory, "hiveMetadataFactory is null");
@@ -111,7 +111,7 @@ private void doSyncPartitionMetadata(ConnectorSession session, String schemaName
         SyncMode syncMode = toSyncMode(mode);
         HdfsContext hdfsContext = new HdfsContext(session, schemaName, tableName);
         HiveIdentity identity = new HiveIdentity(session);
-        SemiTransactionalHiveMetastore metastore = ((HiveMetadata) hiveMetadataFactory.get()).getMetastore();
+        SemiTransactionalHiveMetastore metastore = ((HiveMetadata) hiveMetadataFactory.create(new HiveTransactionHandle())).getMetastore();
         SchemaTableName schemaTableName = new SchemaTableName(schemaName, tableName);

         Table table = metastore.getTable(identity, schemaName, tableName)
@@ -16,7 +16,8 @@
 import com.google.common.collect.ImmutableList;
 import io.prestosql.plugin.hive.HiveMetadata;
 import io.prestosql.plugin.hive.HiveMetastoreClosure;
-import io.prestosql.plugin.hive.TransactionalMetadata;
+import io.prestosql.plugin.hive.HiveTransactionHandle;
+import io.prestosql.plugin.hive.TransactionalMetadataFactory;
 import io.prestosql.plugin.hive.authentication.HiveIdentity;
 import io.prestosql.plugin.hive.metastore.HiveMetastore;
 import io.prestosql.plugin.hive.metastore.Partition;
@@ -36,7 +37,6 @@

 import java.lang.invoke.MethodHandle;
 import java.util.List;
-import java.util.function.Supplier;

 import static io.prestosql.plugin.hive.procedure.Procedures.checkIsPartitionedTable;
 import static io.prestosql.plugin.hive.procedure.Procedures.checkPartitionColumns;
@@ -58,11 +58,11 @@ public class UnregisterPartitionProcedure
             List.class,
             List.class);

-    private final Supplier<TransactionalMetadata> hiveMetadataFactory;
+    private final TransactionalMetadataFactory hiveMetadataFactory;
     private final HiveMetastoreClosure metastore;

     @Inject
-    public UnregisterPartitionProcedure(Supplier<TransactionalMetadata> hiveMetadataFactory, HiveMetastore metastore)
+    public UnregisterPartitionProcedure(TransactionalMetadataFactory hiveMetadataFactory, HiveMetastore metastore)
     {
         this.hiveMetadataFactory = requireNonNull(hiveMetadataFactory, "hiveMetadataFactory is null");
         this.metastore = new HiveMetastoreClosure(requireNonNull(metastore, "metastore is null"));
@@ -105,7 +105,7 @@ private void doUnregisterPartition(ConnectorSession session, String schemaName,
         Partition partition = metastore.getPartition(new HiveIdentity(session), schemaName, tableName, partitionValues)
                 .orElseThrow(() -> new PrestoException(NOT_FOUND, format("Partition %s does not exist", partitionName)));

-        SemiTransactionalHiveMetastore metastore = ((HiveMetadata) hiveMetadataFactory.get()).getMetastore();
+        SemiTransactionalHiveMetastore metastore = ((HiveMetadata) hiveMetadataFactory.create(new HiveTransactionHandle())).getMetastore();

         metastore.dropPartition(
                 session,
@@ -14,8 +14,9 @@
 package io.prestosql.plugin.hive.security;

 import io.prestosql.plugin.hive.metastore.SemiTransactionalHiveMetastore;
+import io.prestosql.spi.connector.ConnectorTransactionHandle;

 public interface AccessControlMetadataFactory
 {
-    AccessControlMetadata create(SemiTransactionalHiveMetastore metastore);
+    AccessControlMetadata create(ConnectorTransactionHandle transactionHandle, SemiTransactionalHiveMetastore metastore);
 }
@@ -68,7 +68,7 @@ private static class StaticAccessControlMetadataModule
         @Override
         public void configure(Binder binder)
         {
-            binder.bind(AccessControlMetadataFactory.class).toInstance(metastore -> new AccessControlMetadata() {});
+            binder.bind(AccessControlMetadataFactory.class).toInstance((transactionHandle, metastore) -> new AccessControlMetadata() {});
         }
     }
 }
@@ -18,6 +18,7 @@
 import com.google.inject.Scopes;
 import io.prestosql.plugin.hive.metastore.SemiTransactionalHiveMetastore;
 import io.prestosql.spi.connector.ConnectorAccessControl;
+import io.prestosql.spi.connector.ConnectorTransactionHandle;

public class SqlStandardSecurityModule
implements Module
@@ -35,7 +36,7 @@ private static final class SqlStandardAccessControlMetadataFactory
         public SqlStandardAccessControlMetadataFactory() {}

         @Override
-        public AccessControlMetadata create(SemiTransactionalHiveMetastore metastore)
+        public AccessControlMetadata create(ConnectorTransactionHandle transactionHandle, SemiTransactionalHiveMetastore metastore)
         {
             return new SqlStandardAccessControlMetadata(metastore);
         }
@@ -759,7 +759,7 @@ protected final void setup(String databaseName, HiveConfig hiveConfig, HiveMetas
                 heartbeatService,
                 new HiveTypeTranslator(),
                 TEST_SERVER_VERSION,
-                SqlStandardAccessControlMetadata::new);
+                (transactionHandle, metastore) -> new SqlStandardAccessControlMetadata(metastore));
         transactionManager = new HiveTransactionManager();
         splitManager = new HiveSplitManager(
                 transactionHandle -> ((HiveMetadata) transactionManager.get(transactionHandle)).getMetastore(),
@@ -838,7 +838,7 @@ protected ConnectorSession newSession(Map<String, Object> propertyValues)

     protected Transaction newTransaction()
     {
-        return new HiveTransaction(transactionManager, metadataFactory.get());
+        return new HiveTransaction(transactionManager, (HiveMetadata) metadataFactory.create(new HiveTransactionHandle()));
     }

     interface Transaction