
HADOOP-13708. Fix a few typos in site *.md documents #140


Closed
wants to merge 17 commits into from
Changes from all commits (17 commits)
2fb392a
HADOOP-13697. LogLevel#main should not throw exception if no argument…
liuml07 Oct 7, 2016
0b4e723
HADOOP-13708. Fix a few typos in site *.md documents
Oct 11, 2016
4b32b14
HADOOP-13684. Snappy may complain Hadoop is built without snappy if l…
jojochuang Oct 11, 2016
8a09bf7
HADOOP-13705. Revert HADOOP-13534 Remove unused TrashPolicy#getInstan…
umbrant Oct 11, 2016
dacd3ec
HDFS-10991. Export hdfsTruncateFile symbol in libhdfs. Contributed by…
umbrant Oct 11, 2016
61f0490
HDFS-10984. Expose nntop output as metrics. Contributed by Siddharth …
xiaoyuyao Oct 11, 2016
3c9a010
HDFS-10903. Replace config key literal strings with config key names …
liuml07 Oct 11, 2016
b84c489
HADOOP-13698. Document caveat for KeyShell when underlying KeyProvide…
xiao-chen Oct 12, 2016
7ba7092
HDFS-10965. Add unit test for HDFS command 'dfsadmin -printTopology'.…
liuml07 Oct 11, 2016
6378845
YARN-4464. Lower the default max applications stored in the RM and st…
Oct 12, 2016
6476934
YARN-5677. RM should transition to standby when connection is lost fo…
Oct 12, 2016
85cd06f
HDFS-10789. Route webhdfs through the RPC call queue. Contributed by …
kihwal Oct 12, 2016
12d739a
HADOOP-13700. Remove unthrown IOException from TrashPolicy#initialize…
umbrant Oct 12, 2016
da74f9b
HADOOP-13708. Fix a few typos in site *.md documents
Oct 11, 2016
bac5c90
Merge branch 'HADOOP-13708' of https://github.com/danix800/hadoop int…
Oct 13, 2016
ea41804
HADOOP-13708. Fix a few typos in site *.md documents
Oct 11, 2016
2f135c7
Merge branch 'HADOOP-13708' of https://github.com/danix800/hadoop int…
Oct 13, 2016
@@ -36,15 +36,25 @@ public abstract class TrashPolicy extends Configured {
protected Path trash; // path to trash directory
protected long deletionInterval; // deletion interval for Emptier

/**
* Used to setup the trash policy. Must be implemented by all TrashPolicy
* implementations.
* @param conf the configuration to be used
* @param fs the filesystem to be used
* @param home the home directory
* @deprecated Use {@link #initialize(Configuration, FileSystem)} instead.
*/
@Deprecated
public abstract void initialize(Configuration conf, FileSystem fs, Path home);

/**
* Used to setup the trash policy. Must be implemented by all TrashPolicy
* implementations. Different from initialize(conf, fs, home), this one does
* not assume trash always under /user/$USER due to HDFS encryption zone.
* @param conf the configuration to be used
* @param fs the filesystem to be used
* @throws IOException
*/
public void initialize(Configuration conf, FileSystem fs) throws IOException{
public void initialize(Configuration conf, FileSystem fs) {
throw new UnsupportedOperationException();
}

@@ -99,6 +109,25 @@ public Path getCurrentTrashDir(Path path) throws IOException {
*/
public abstract Runnable getEmptier() throws IOException;

/**
* Get an instance of the configured TrashPolicy based on the value
* of the configuration parameter fs.trash.classname.
*
* @param conf the configuration to be used
* @param fs the file system to be used
* @param home the home directory
* @return an instance of TrashPolicy
* @deprecated Use {@link #getInstance(Configuration, FileSystem)} instead.
*/
@Deprecated
public static TrashPolicy getInstance(Configuration conf, FileSystem fs, Path home) {
Class<? extends TrashPolicy> trashClass = conf.getClass(
"fs.trash.classname", TrashPolicyDefault.class, TrashPolicy.class);
TrashPolicy trash = ReflectionUtils.newInstance(trashClass, conf);
trash.initialize(conf, fs, home); // initialize TrashPolicy
return trash;
}

/**
* Get an instance of the configured TrashPolicy based on the value
* of the configuration parameter fs.trash.classname.
@@ -107,8 +136,7 @@ public Path getCurrentTrashDir(Path path) throws IOException {
* @param fs the file system to be used
* @return an instance of TrashPolicy
*/
public static TrashPolicy getInstance(Configuration conf, FileSystem fs)
throws IOException {
public static TrashPolicy getInstance(Configuration conf, FileSystem fs) {
Class<? extends TrashPolicy> trashClass = conf.getClass(
"fs.trash.classname", TrashPolicyDefault.class, TrashPolicy.class);
TrashPolicy trash = ReflectionUtils.newInstance(trashClass, conf);
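For readers following the TrashPolicy change above, here is a minimal sketch of calling code against the two factory methods; the wrapper class and the printed message are illustrative assumptions, while the TrashPolicy and FileSystem calls are the ones visible in this diff.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.TrashPolicy;

public class TrashPolicyExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Preferred two-argument factory: no longer declares IOException and does
    // not assume the trash directory sits under /user/$USER.
    TrashPolicy policy = TrashPolicy.getInstance(conf, fs);

    // Restored, deprecated three-argument factory, kept so existing callers
    // continue to compile until they migrate.
    TrashPolicy legacy = TrashPolicy.getInstance(conf, fs, fs.getHomeDirectory());

    System.out.println("Trash enabled: " + policy.isEnabled());
  }
}
```

Both overloads resolve the implementation class from fs.trash.classname; they differ only in which initialize overload they invoke.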
@@ -75,6 +75,21 @@ private TrashPolicyDefault(FileSystem fs, Configuration conf)
initialize(conf, fs);
}

/**
* @deprecated Use {@link #initialize(Configuration, FileSystem)} instead.
*/
@Override
@Deprecated
public void initialize(Configuration conf, FileSystem fs, Path home) {
this.fs = fs;
this.deletionInterval = (long)(conf.getFloat(
FS_TRASH_INTERVAL_KEY, FS_TRASH_INTERVAL_DEFAULT)
* MSECS_PER_MINUTE);
this.emptierInterval = (long)(conf.getFloat(
FS_TRASH_CHECKPOINT_INTERVAL_KEY, FS_TRASH_CHECKPOINT_INTERVAL_DEFAULT)
* MSECS_PER_MINUTE);
}

@Override
public void initialize(Configuration conf, FileSystem fs) {
this.fs = fs;
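The restored override above derives deletionInterval and emptierInterval from the trash configuration keys (values in minutes). A small sketch of setting them programmatically; the key literals are the standard names assumed to back FS_TRASH_INTERVAL_KEY and FS_TRASH_CHECKPOINT_INTERVAL_KEY, and the numbers are arbitrary examples.

```java
import org.apache.hadoop.conf.Configuration;

public class TrashIntervalsExample {
  /** Returns a Configuration with trash retention and checkpointing enabled. */
  public static Configuration withTrashEnabled() {
    Configuration conf = new Configuration();
    conf.setFloat("fs.trash.interval", 1440f);          // keep trash for 24 hours
    conf.setFloat("fs.trash.checkpoint.interval", 60f); // run the Emptier checkpoint hourly
    return conf;
  }
}
```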
@@ -60,20 +60,22 @@ public Configuration getConf() {
* Are the native snappy libraries loaded & initialized?
*/
public static void checkNativeCodeLoaded() {
if (!NativeCodeLoader.isNativeCodeLoaded() ||
!NativeCodeLoader.buildSupportsSnappy()) {
throw new RuntimeException("native snappy library not available: " +
"this version of libhadoop was built without " +
"snappy support.");
}
if (!SnappyCompressor.isNativeCodeLoaded()) {
throw new RuntimeException("native snappy library not available: " +
"SnappyCompressor has not been loaded.");
}
if (!SnappyDecompressor.isNativeCodeLoaded()) {
throw new RuntimeException("native snappy library not available: " +
"SnappyDecompressor has not been loaded.");
}
if (!NativeCodeLoader.buildSupportsSnappy()) {
throw new RuntimeException("native snappy library not available: " +
"this version of libhadoop was built without " +
"snappy support.");
}
if (!NativeCodeLoader.isNativeCodeLoaded()) {
throw new RuntimeException("Failed to load libhadoop.");
}
if (!SnappyCompressor.isNativeCodeLoaded()) {
throw new RuntimeException("native snappy library not available: " +
"SnappyCompressor has not been loaded.");
}
if (!SnappyDecompressor.isNativeCodeLoaded()) {
throw new RuntimeException("native snappy library not available: " +
"SnappyDecompressor has not been loaded.");
}
}

public static boolean isNativeCodeLoaded() {
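With the reordered checks above, a missing libhadoop is now reported separately from a libhadoop that was built without snappy. A small, hypothetical caller that surfaces the distinction; only the SnappyCodec and NativeCodeLoader calls are Hadoop APIs, the wrapper class is an assumption.

```java
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.util.NativeCodeLoader;

public class SnappyAvailabilityCheck {
  public static void main(String[] args) {
    // After this change, a missing libhadoop yields "Failed to load libhadoop."
    // instead of the misleading "built without snappy support" message.
    System.out.println("libhadoop loaded: " + NativeCodeLoader.isNativeCodeLoaded());
    try {
      SnappyCodec.checkNativeCodeLoaded();
      System.out.println("Native snappy compression is available.");
    } catch (RuntimeException e) {
      System.err.println("Snappy unavailable: " + e.getMessage());
    }
  }
}
```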
@@ -20,6 +20,7 @@

import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.hadoop.ipc.Server.Call;
@@ -37,14 +38,10 @@ public ExternalCall(PrivilegedExceptionAction<T> action) {

public abstract UserGroupInformation getRemoteUser();

public final T get() throws IOException, InterruptedException {
public final T get() throws InterruptedException, ExecutionException {
waitForCompletion();
if (error != null) {
if (error instanceof IOException) {
throw (IOException)error;
} else {
throw new IOException(error);
}
throw new ExecutionException(error);
}
return result;
}
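A hedged sketch of the reworked get() contract, which now surfaces failures as ExecutionException instead of IOException; the helper class, the way the call is queued on the RPC server, and the exact package of ExternalCall are assumptions rather than part of this diff.

```java
import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import java.util.concurrent.ExecutionException;

import org.apache.hadoop.ipc.ExternalCall;
import org.apache.hadoop.security.UserGroupInformation;

public class ExternalCallClientSketch {
  /** Waits for a queued call and unwraps the new ExecutionException. */
  static String awaitResult(ExternalCall<String> call)
      throws IOException, InterruptedException {
    try {
      return call.get();
    } catch (ExecutionException e) {
      Throwable cause = e.getCause();
      throw (cause instanceof IOException)
          ? (IOException) cause
          : new IOException(cause);
    }
  }

  /** Builds an illustrative call; real users supply the remote caller's UGI. */
  static ExternalCall<String> newCall(PrivilegedExceptionAction<String> action,
                                      final UserGroupInformation ugi) {
    return new ExternalCall<String>(action) {
      @Override
      public UserGroupInformation getRemoteUser() {
        return ugi;
      }
    };
  }
}
```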
@@ -47,15 +47,17 @@
import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
import org.apache.hadoop.security.authentication.client.KerberosAuthenticator;
import org.apache.hadoop.security.ssl.SSLFactory;
import org.apache.hadoop.util.GenericOptionsParser;
import org.apache.hadoop.util.ServletUtil;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

/**
* Change log level in runtime.
*/
@InterfaceStability.Evolving
public class LogLevel {
public static final String USAGES = "\nUsage: General options are:\n"
public static final String USAGES = "\nUsage: Command options are:\n"
+ "\t[-getlevel <host:port> <classname> [-protocol (http|https)]\n"
+ "\t[-setlevel <host:port> <classname> <level> "
+ "[-protocol (http|https)]\n";
@@ -67,7 +69,7 @@ public class LogLevel {
*/
public static void main(String[] args) throws Exception {
CLI cli = new CLI(new Configuration());
System.exit(cli.run(args));
System.exit(ToolRunner.run(cli, args));
}

/**
@@ -81,6 +83,7 @@ private enum Operations {

private static void printUsage() {
System.err.println(USAGES);
GenericOptionsParser.printGenericCommandUsage(System.err);
}

public static boolean isValidProtocol(String protocol) {
@@ -107,7 +110,7 @@ public int run(String[] args) throws Exception {
sendLogLevelRequest();
} catch (HadoopIllegalArgumentException e) {
printUsage();
throw e;
return -1;
}
return 0;
}
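Routing main() through ToolRunner, as the hunk above now does, is the standard Hadoop Tool pattern: generic options such as -D key=value and -conf file are applied to the Configuration before run() is invoked, and run()'s return value becomes the process exit status. A minimal sketch with a hypothetical tool class:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class EchoOptionTool extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    // Generic options (e.g. -D demo.option=value) are already merged into getConf().
    System.out.println("demo.option = " + getConf().get("demo.option", "unset"));
    // Returning non-zero (as LogLevel's CLI now does on bad arguments) sets the
    // exit code without an uncaught HadoopIllegalArgumentException stack trace.
    return args.length == 0 ? -1 : 0;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new EchoOptionTool(), args));
  }
}
```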
@@ -35,7 +35,7 @@ Installation

Installing a Hadoop cluster typically involves unpacking the software on all the machines in the cluster or installing it via a packaging system as appropriate for your operating system. It is important to divide up the hardware into functions.

Typically one machine in the cluster is designated as the NameNode and another machine the as ResourceManager, exclusively. These are the masters. Other services (such as Web App Proxy Server and MapReduce Job History server) are usually run either on dedicated hardware or on shared infrastrucutre, depending upon the load.
Typically one machine in the cluster is designated as the NameNode and another machine as the ResourceManager, exclusively. These are the masters. Other services (such as Web App Proxy Server and MapReduce Job History server) are usually run either on dedicated hardware or on shared infrastrucutre, depending upon the load.

The rest of the machines in the cluster act as both DataNode and NodeManager. These are the workers.

@@ -202,7 +202,9 @@ Manage keys via the KeyProvider. For details on KeyProviders, see the [Transpare

Providers frequently require that a password or other secret is supplied. If the provider requires a password and is unable to find one, it will use a default password and emit a warning message that the default password is being used. If the `-strict` flag is supplied, the warning message becomes an error message and the command returns immediately with an error status.

NOTE: Some KeyProviders (e.g. org.apache.hadoop.crypto.key.JavaKeyStoreProvider) does not support uppercase key names.
NOTE: Some KeyProviders (e.g. org.apache.hadoop.crypto.key.JavaKeyStoreProvider) do not support uppercase key names.

NOTE: Some KeyProviders do not directly execute a key deletion (e.g. performs a soft-delete instead, or delay the actual deletion, to prevent mistake). In these cases, one may encounter errors when creating/deleting a key with the same name after deleting it. Please check the underlying KeyProvider for details.
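To make the new caveat concrete, a hedged sketch of the delete-then-recreate sequence that can fail against a provider that soft-deletes or defers deletion; the key name and the choice of the first configured provider are illustrative assumptions, and error handling is omitted.

```java
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

public class KeyRecreateExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    List<KeyProvider> providers = KeyProviderFactory.getProviders(conf);
    KeyProvider provider = providers.get(0);

    provider.deleteKey("example.key");
    provider.flush();

    // Against a KeyProvider that soft-deletes or delays deletion, creating a
    // key with the same name right away may be rejected by the backend.
    provider.createKey("example.key", KeyProvider.options(conf));
    provider.flush();
  }
}
```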

### `trace`

@@ -68,15 +68,15 @@ Wire compatibility concerns data being transmitted over the wire between Hadoop
#### Use Cases

* Client-Server compatibility is required to allow users to continue using the old clients even after upgrading the server (cluster) to a later version (or vice versa). For example, a Hadoop 2.1.0 client talking to a Hadoop 2.3.0 cluster.
* Client-Server compatibility is also required to allow users to upgrade the client before upgrading the server (cluster). For example, a Hadoop 2.4.0 client talking to a Hadoop 2.3.0 cluster. This allows deployment of client-side bug fixes ahead of full cluster upgrades. Note that new cluster features invoked by new client APIs or shell commands will not be usable. YARN applications that attempt to use new APIs (including new fields in data structures) that have not yet deployed to the cluster can expect link exceptions.
* Client-Server compatibility is also required to allow users to upgrade the client before upgrading the server (cluster). For example, a Hadoop 2.4.0 client talking to a Hadoop 2.3.0 cluster. This allows deployment of client-side bug fixes ahead of full cluster upgrades. Note that new cluster features invoked by new client APIs or shell commands will not be usable. YARN applications that attempt to use new APIs (including new fields in data structures) that have not yet been deployed to the cluster can expect link exceptions.
* Client-Server compatibility is also required to allow upgrading individual components without upgrading others. For example, upgrade HDFS from version 2.1.0 to 2.2.0 without upgrading MapReduce.
* Server-Server compatibility is required to allow mixed versions within an active cluster so the cluster may be upgraded without downtime in a rolling fashion.

#### Policy

* Both Client-Server and Server-Server compatibility is preserved within a major release. (Different policies for different categories are yet to be considered.)
* Compatibility can be broken only at a major release, though breaking compatibility even at major releases has grave consequences and should be discussed in the Hadoop community.
* Hadoop protocols are defined in .proto (ProtocolBuffers) files. Client-Server protocols and Server-protocol .proto files are marked as stable. When a .proto file is marked as stable it means that changes should be made in a compatible fashion as described below:
* Hadoop protocols are defined in .proto (ProtocolBuffers) files. Client-Server protocols and Server-Server protocol .proto files are marked as stable. When a .proto file is marked as stable it means that changes should be made in a compatible fashion as described below:
* The following changes are compatible and are allowed at any time:
* Add an optional field, with the expectation that the code deals with the field missing due to communication with an older version of the code.
* Add a new rpc/method to the service
@@ -101,7 +101,7 @@ Wire compatibility concerns data being transmitted over the wire between Hadoop

### Java Binary compatibility for end-user applications i.e. Apache Hadoop ABI

As Apache Hadoop revisions are upgraded end-users reasonably expect that their applications should continue to work without any modifications. This is fulfilled as a result of support API compatibility, Semantic compatibility and Wire compatibility.
As Apache Hadoop revisions are upgraded end-users reasonably expect that their applications should continue to work without any modifications. This is fulfilled as a result of supporting API compatibility, Semantic compatibility and Wire compatibility.

However, Apache Hadoop is a very complex, distributed system and services a very wide variety of use-cases. In particular, Apache Hadoop MapReduce is a very, very wide API; in the sense that end-users may make wide-ranging assumptions such as layout of the local disk when their map/reduce tasks are executing, environment variables for their tasks etc. In such cases, it becomes very hard to fully specify, and support, absolute compatibility.

@@ -115,12 +115,12 @@ However, Apache Hadoop is a very complex, distributed system and services a very

* Existing MapReduce, YARN & HDFS applications and frameworks should work unmodified within a major release i.e. Apache Hadoop ABI is supported.
* A very minor fraction of applications maybe affected by changes to disk layouts etc., the developer community will strive to minimize these changes and will not make them within a minor version. In more egregious cases, we will consider strongly reverting these breaking changes and invalidating offending releases if necessary.
* In particular for MapReduce applications, the developer community will try our best to support provide binary compatibility across major releases e.g. applications using org.apache.hadoop.mapred.
* In particular for MapReduce applications, the developer community will try our best to support providing binary compatibility across major releases e.g. applications using org.apache.hadoop.mapred.
* APIs are supported compatibly across hadoop-1.x and hadoop-2.x. See [Compatibility for MapReduce applications between hadoop-1.x and hadoop-2.x](../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html) for more details.

### REST APIs

REST API compatibility corresponds to both the request (URLs) and responses to each request (content, which may contain other URLs). Hadoop REST APIs are specifically meant for stable use by clients across releases, even major releases. The following are the exposed REST APIs:
REST API compatibility corresponds to both the requests (URLs) and responses to each request (content, which may contain other URLs). Hadoop REST APIs are specifically meant for stable use by clients across releases, even major ones. The following are the exposed REST APIs:

* [WebHDFS](../hadoop-hdfs/WebHDFS.html) - Stable
* [ResourceManager](../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html)
@@ -136,7 +136,7 @@ The APIs annotated stable in the text above preserve compatibility across at lea

### Metrics/JMX

While the Metrics API compatibility is governed by Java API compatibility, the actual metrics exposed by Hadoop need to be compatible for users to be able to automate using them (scripts etc.). Adding additional metrics is compatible. Modifying (eg changing the unit or measurement) or removing existing metrics breaks compatibility. Similarly, changes to JMX MBean object names also break compatibility.
While the Metrics API compatibility is governed by Java API compatibility, the actual metrics exposed by Hadoop need to be compatible for users to be able to automate using them (scripts etc.). Adding additional metrics is compatible. Modifying (e.g. changing the unit or measurement) or removing existing metrics breaks compatibility. Similarly, changes to JMX MBean object names also break compatibility.

#### Policy

@@ -148,7 +148,7 @@ User and system level data (including metadata) is stored in files of different

#### User-level file formats

Changes to formats that end-users use to store their data can prevent them for accessing the data in later releases, and hence it is highly important to keep those file-formats compatible. One can always add a "new" format improving upon an existing format. Examples of these formats include har, war, SequenceFileFormat etc.
Changes to formats that end-users use to store their data can prevent them from accessing the data in later releases, and hence it is highly important to keep those file-formats compatible. One can always add a "new" format improving upon an existing format. Examples of these formats include har, war, SequenceFileFormat etc.

##### Policy

@@ -185,7 +185,7 @@ Depending on the degree of incompatibility in the changes, the following potenti

### Command Line Interface (CLI)

The Hadoop command line programs may be use either directly via the system shell or via shell scripts. Changing the path of a command, removing or renaming command line options, the order of arguments, or the command return code and output break compatibility and may adversely affect users.
The Hadoop command line programs may be used either directly via the system shell or via shell scripts. Changing the path of a command, removing or renaming command line options, the order of arguments, or the command return code and output break compatibility and may adversely affect users.

#### Policy
