[SPARK-30312][SQL] Preserve path permission and acl when truncate table #26956
Changes from all commits: 73e1bb0, 60134c1, 1df437b, 2c3f1fb, 39fe234, 7b29f2f, ab69638, 00a2bc8, 8e429a3
@@ -24,6 +24,7 @@ import scala.util.Try
 import scala.util.control.NonFatal

 import org.apache.hadoop.fs.{FileContext, FsConstants, Path}
 import org.apache.hadoop.fs.permission.{AclEntry, FsPermission}

 import org.apache.spark.sql.{AnalysisException, Row, SparkSession}
 import org.apache.spark.sql.catalyst.TableIdentifier

@@ -494,13 +495,59 @@ case class TruncateTableCommand(
        partLocations
      }
      val hadoopConf = spark.sessionState.newHadoopConf()
      val ignorePermissionAcl = SQLConf.get.truncateTableIgnorePermissionAcl
      locations.foreach { location =>
        if (location.isDefined) {
          val path = new Path(location.get)
          try {
            val fs = path.getFileSystem(hadoopConf)

            // Not all fs impl. support these APIs.
            var optPermission: Option[FsPermission] = None
            var optAcls: Option[java.util.List[AclEntry]] = None
            if (!ignorePermissionAcl) {
              val fileStatus = fs.getFileStatus(path)
              try {
                optPermission = Some(fileStatus.getPermission())
              } catch {
                case NonFatal(_) => // do nothing
              }

              try {
                optAcls = Some(fs.getAclStatus(path).getEntries)
              } catch {
                case NonFatal(_) => // do nothing
              }
            }

            fs.delete(path, true)
[Review thread on fs.delete(path, true); some inline code references were lost from the page and appear as "…"]

Reviewer: Not familiar with Hadoop FS APIs, but do we have something like …?

Author: In the FileSystem API, I didn't find one like that. But searching again, there is a FileUtil API that seems to work like that. I will test tomorrow to see whether it keeps permission/ACL in a Spark cluster as well as the current approach does.

Reviewer: Oh, I think ….

Author: So I think there is no single API to do …. We can do ….

Reviewer: Do you know how Hive/Presto implement TRUNCATE TABLE? Are there other file attributes we need to retain?

Author: For Hive, I traced the code path for TRUNCATE TABLE: it gets the locations of the table/partitions and calls Warehouse.deleteDir for each. Now I retain permission and ACLs; I was doing …. In the Hive code, since it runs in the metastore server, I think it runs with enough permission to set owner/group. For Spark, we may not be running as a super user.

Author: In Presto, it looks like it takes the approach of listing all files/directories under the path to delete (i.e., …).

Reviewer: The Presto way seems safer to me.

Author: I was concerned about a performance regression with the Presto way: for a table with many files/directories, deleting them one by one could be a bottleneck. What if we add a config for retaining permission/ACLs when truncating a table? Users could disable it and directly delete the top directory (the current behavior) without retaining permission/ACLs.
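The "Presto way" discussed above (delete the directory's contents but keep the directory itself, so its permission and ACLs are never disturbed) can be sketched with a local-filesystem analogue. This is a hedged illustration in Python's stdlib, not Presto's or Hadoop's actual API; the function name is made up:

```python
import os
import shutil
import stat
import tempfile

def truncate_dir_keep_root(path):
    # Delete everything under `path` but keep `path` itself, so its
    # permission bits (and ACLs, on filesystems that have them) are
    # never touched.
    for entry in list(os.scandir(path)):
        if entry.is_dir(follow_symlinks=False):
            shutil.rmtree(entry.path)
        else:
            os.unlink(entry.path)

# Usage: the directory's mode survives the truncation untouched.
root = tempfile.mkdtemp()
os.chmod(root, 0o777)
open(os.path.join(root, "part-00000"), "w").close()
os.makedirs(os.path.join(root, "sub", "dir"))
truncate_dir_keep_root(root)
assert os.listdir(root) == []
assert stat.S_IMODE(os.stat(root).st_mode) == 0o777
```

The trade-off raised in the thread is visible here: the loop touches every top-level entry, so a table with very many files pays per-file deletion cost, while deleting the top directory is a single call.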
            // We should keep original permission/acl of the path.
            // For owner/group, only super-user can set it, for example on HDFS. Because
            // current user can delete the path, we assume the user/group is correct or not an issue.
            fs.mkdirs(path)
            if (!ignorePermissionAcl) {
              optPermission.foreach { permission =>
                try {
                  fs.setPermission(path, permission)
                } catch {
                  case NonFatal(e) =>
                    throw new SecurityException(
                      s"Failed to set original permission $permission back to " +
                        s"the created path: $path. Exception: ${e.getMessage}")
                }
              }
              optAcls.foreach { acls =>
                try {
                  fs.setAcl(path, acls)
                } catch {
                  case NonFatal(e) =>
                    throw new SecurityException(
                      s"Failed to set original ACL $acls back to " +
                        s"the created path: $path. Exception: ${e.getMessage}")
                }
              }
            }
          } catch {
            case NonFatal(e) =>
              throw new AnalysisException(
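The save/delete/recreate/restore flow in the hunk above can be mirrored with plain POSIX permission bits. A minimal sketch, assuming a local filesystem and using Python's stdlib rather than the Hadoop API (the comments map each step to the Scala code):

```python
import os
import shutil
import stat
import tempfile

def truncate_recreate(path, ignore_permission_acl=False):
    # Remember the permission bits, delete the directory recursively,
    # recreate it, and restore the saved bits unless told to ignore them.
    saved = None
    if not ignore_permission_acl:
        saved = stat.S_IMODE(os.stat(path).st_mode)  # fileStatus.getPermission()
    shutil.rmtree(path)        # fs.delete(path, true)
    os.mkdir(path)             # fs.mkdirs(path); mode now depends on umask
    if saved is not None:
        os.chmod(path, saved)  # fs.setPermission(path, permission)

# Usage: a mode the process umask would not reproduce survives the round trip.
tbl = tempfile.mkdtemp()
os.chmod(tbl, 0o707)
truncate_recreate(tbl)
assert stat.S_IMODE(os.stat(tbl).st_mode) == 0o707
```

As the patch's comment notes, owner/group are deliberately not restored: only a super-user could set them back, and the fact that the current user could delete the path suggests ownership is already acceptable.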
@@ -21,7 +21,8 @@ import java.io.{File, PrintWriter}
 import java.net.URI
 import java.util.Locale

-import org.apache.hadoop.fs.Path
+import org.apache.hadoop.fs.{Path, RawLocalFileSystem}
+import org.apache.hadoop.fs.permission.{AclEntry, AclEntryScope, AclEntryType, AclStatus, FsAction, FsPermission}

 import org.apache.spark.internal.config
 import org.apache.spark.internal.config.RDD_PARALLEL_LISTING_THRESHOLD
@@ -1981,6 +1982,60 @@ abstract class DDLSuite extends QueryTest with SQLTestUtils {
    }
  }

  test("SPARK-30312: truncate table - keep acl/permission") {
    import testImplicits._
    val ignorePermissionAcl = Seq(true, false)

    ignorePermissionAcl.foreach { ignore =>
      withSQLConf(
        "fs.file.impl" -> classOf[FakeLocalFsFileSystem].getName,
        "fs.file.impl.disable.cache" -> "true",
        SQLConf.TRUNCATE_TABLE_IGNORE_PERMISSION_ACL.key -> ignore.toString) {
        withTable("tab1") {
          sql("CREATE TABLE tab1 (col INT) USING parquet")
          sql("INSERT INTO tab1 SELECT 1")
          checkAnswer(spark.table("tab1"), Row(1))

          val tablePath = new Path(spark.sessionState.catalog
            .getTableMetadata(TableIdentifier("tab1")).storage.locationUri.get)

          val hadoopConf = spark.sessionState.newHadoopConf()
          val fs = tablePath.getFileSystem(hadoopConf)
          val fileStatus = fs.getFileStatus(tablePath)

          fs.setPermission(tablePath, new FsPermission("777"))
          assert(fileStatus.getPermission().toString() == "rwxrwxrwx")

          // Set ACL to table path.
          val customAcl = new java.util.ArrayList[AclEntry]()
          customAcl.add(new AclEntry.Builder()
            .setType(AclEntryType.USER)
            .setScope(AclEntryScope.ACCESS)
            .setPermission(FsAction.READ).build())
          fs.setAcl(tablePath, customAcl)
          assert(fs.getAclStatus(tablePath).getEntries().get(0) == customAcl.get(0))

          sql("TRUNCATE TABLE tab1")
          assert(spark.table("tab1").collect().isEmpty)

          val fileStatus2 = fs.getFileStatus(tablePath)
          if (ignore) {
            assert(fileStatus2.getPermission().toString() == "rwxr-xr-x")
[Review thread on the "rwxr-xr-x" assertion]

Reviewer: Thank you for updating, @viirya. This test case may fail on a system where the default umask is not 0022. Can we have a more robust way?

Author: Good point! Let me update it.
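The reviewer's point can be demonstrated directly: a freshly created directory gets mode 0o777 masked by the process umask, so expecting "rwxr-xr-x" (0o755) after the truncated directory is recreated only holds when the umask is 0022. A small Python demonstration (the path name is illustrative):

```python
import os
import stat
import tempfile

# A new directory is created with mode 0o777 & ~umask. With a stricter
# umask than 0022, the recreated directory will not be "rwxr-xr-x".
old_umask = os.umask(0o027)  # simulate a site with a stricter default
try:
    base = tempfile.mkdtemp()
    recreated = os.path.join(base, "table_dir")  # stands in for the table path
    os.mkdir(recreated)
    mode = stat.S_IMODE(os.stat(recreated).st_mode)
    assert mode == 0o750  # rwxr-x---, not the expected rwxr-xr-x (0o755)
finally:
    os.umask(old_umask)
```

This is why a hard-coded "rwxr-xr-x" assertion is fragile across machines.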
          } else {
            assert(fileStatus2.getPermission().toString() == "rwxrwxrwx")
          }
          val aclEntries = fs.getAclStatus(tablePath).getEntries()
          if (ignore) {
            assert(aclEntries.size() == 0)
          } else {
            assert(aclEntries.size() == 1)
            assert(aclEntries.get(0) == customAcl.get(0))
          }
        }
      }
    }
  }

  test("create temporary view with mismatched schema") {
    withTable("tab1") {
      spark.range(10).write.saveAsTable("tab1")
@@ -2879,3 +2934,25 @@ abstract class DDLSuite extends QueryTest with SQLTestUtils {
    }
  }
}

object FakeLocalFsFileSystem {
  var aclStatus = new AclStatus.Builder().build()
}

// A fake local filesystem used to test ACLs. It keeps an ACL status, and cleans it up
// when a path on this filesystem is deleted. Note that for test purposes it has only
// one ACL status for all paths.
class FakeLocalFsFileSystem extends RawLocalFileSystem {
  import FakeLocalFsFileSystem._

  override def delete(f: Path, recursive: Boolean): Boolean = {
    aclStatus = new AclStatus.Builder().build()
    super.delete(f, recursive)
  }

  override def getAclStatus(path: Path): AclStatus = aclStatus

  override def setAcl(path: Path, aclSpec: java.util.List[AclEntry]): Unit = {
    aclStatus = new AclStatus.Builder().addEntries(aclSpec).build()
  }
}
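The test double above works because RawLocalFileSystem has no real ACL support, so the fake only needs one shared ACL status that delete() wipes. The same idea in miniature (hypothetical names, Python for brevity, not the Hadoop API):

```python
class FakeAclFs:
    # Keeps a single ACL list for all paths and clears it on delete,
    # which is just enough to observe whether a truncate operation
    # restores the ACL after recreating the directory.
    def __init__(self):
        self.acl = []

    def set_acl(self, path, entries):
        self.acl = list(entries)

    def get_acl(self, path):
        return self.acl

    def delete(self, path, recursive=True):
        self.acl = []
        return True

# Usage: after a delete, the recorded ACL is gone unless explicitly restored.
fs = FakeAclFs()
fs.set_acl("/tab1", ["user::r--"])
fs.delete("/tab1")
assert fs.get_acl("/tab1") == []
```

Because the fake tracks one status globally, a test that sets an ACL, truncates, and then still sees the ACL proves the command re-applied it after the delete.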
[Review thread on the new config name]

Reviewer: -> spark.sql.truncateTable.ignorePermissionAcl.enabled?

Author: Ok. I will create a followup for this. Thanks.