Replaced absolute URLs in docs with attributes
clintongormley committed Feb 4, 2017
1 parent 0fea2a2 commit e181a02
Showing 8 changed files with 15 additions and 14 deletions.
2 changes: 2 additions & 0 deletions docs/Versions.asciidoc
@@ -16,6 +16,8 @@ release-state can be: released | prerelease | unreleased
:plugins: https://www.elastic.co/guide/en/elasticsearch/plugins/{branch}
:javaclient: https://www.elastic.co/guide/en/elasticsearch/client/java-api/{branch}
:xpack: https://www.elastic.co/guide/en/x-pack/5.0
+:logstash: https://www.elastic.co/guide/en/logstash/{branch}
+:kibana: https://www.elastic.co/guide/en/kibana/{branch}
:issue: https://github.com/elastic/elasticsearch/issues/
:pull: https://github.com/elastic/elasticsearch/pull/

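For readers unfamiliar with the mechanism, these AsciiDoc attributes are plain text substitutions: wherever the docs source writes `{logstash}` or `{kibana}`, the build expands it to the URL defined above, with `{branch}` resolved to the branch being built. A minimal sketch of source versus expanded output (the `5.2` branch value is only an illustration):

[source,asciidoc]
----
// written in the docs source
{logstash}/plugins-outputs-elasticsearch.html[Logstash output to Elasticsearch]

// expands, for :branch: 5.2, to
https://www.elastic.co/guide/en/logstash/5.2/plugins-outputs-elasticsearch.html[Logstash output to Elasticsearch]
----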
2 changes: 1 addition & 1 deletion docs/plugins/analysis-icu.asciidoc
@@ -336,7 +336,7 @@ PUT icu_sample
Collations are used for sorting documents in a language-specific word order.
The `icu_collation` token filter is available to all indices and defaults to
using the
-https://www.elastic.co/guide/en/elasticsearch/guide/current/sorting-collations.html#uca[DUCET collation],
+{defguide}/sorting-collations.html#uca[DUCET collation],
which is a best-effort attempt at language-neutral sorting.

Below is an example of how to set up a field for sorting German names in
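That example lies outside this hunk; as a hedged sketch only (not the original snippet), a German-specific collation is typically configured through the `icu_collation` filter's `language`, `country`, and `variant` parameters; the index, filter, and analyzer names here are placeholders:

[source,js]
----
PUT /my_index
{
  "settings": {
    "analysis": {
      "filter": {
        "german_collator": {
          "type":     "icu_collation",
          "language": "de",
          "country":  "DE",
          "variant":  "@collation=phonebook"
        }
      },
      "analyzer": {
        "german_sort": {
          "tokenizer": "keyword",
          "filter":    [ "german_collator" ]
        }
      }
    }
  }
}
----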
2 changes: 1 addition & 1 deletion docs/plugins/discovery-ec2.asciidoc
@@ -313,7 +313,7 @@ Prefer https://aws.amazon.com/amazon-linux-ami/[Amazon Linux AMIs] as since Elas
===== Networking
* Network throttling takes place on smaller instance types, in the form of both https://lab.getbase.com/how-we-discovered-limitations-on-the-aws-tcp-stack/[bandwidth and number of connections]. Therefore, if a large number of connections is needed and networking is becoming a bottleneck, avoid https://aws.amazon.com/ec2/instance-types/[instance types] with networking labeled as `Moderate` or `Low`.
* Multicast is not supported, even within a VPC; the aws cloud plugin instead joins nodes by performing a security group lookup.
-* When running in multiple http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html[availability zones] be sure to leverage https://www.elastic.co/guide/en/elasticsearch/reference/master/allocation-awareness.html[shard allocation awareness] so that not all copies of shard data reside in the same availability zone.
+* When running in multiple http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html[availability zones] be sure to leverage {ref}/allocation-awareness.html[shard allocation awareness] so that not all copies of shard data reside in the same availability zone.
* Do not span a cluster across regions. If necessary, use a tribe node.
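To make the allocation-awareness point above concrete, a minimal `elasticsearch.yml` sketch; the attribute name and zone value are assumptions, and any custom node attribute works:

[source,yaml]
----
# tag each node with the zone it runs in (value differs per node)
node.attr.aws_availability_zone: us-east-1a

# ask the allocator to spread shard copies across zones
cluster.routing.allocation.awareness.attributes: aws_availability_zone
----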

===== Misc
12 changes: 6 additions & 6 deletions docs/plugins/integrations.asciidoc
@@ -27,10 +27,10 @@ Integrations are not plugins, but are external tools or modules that make it eas
Tiki has native support for Elasticsearch. This provides faster and better
search (facets, etc.), along with some Natural Language Processing features
(e.g. "More like this").

* http://extensions.xwiki.org/xwiki/bin/view/Extension/Elastic+Search+Macro/[XWiki Next Generation Wiki]:
XWiki has an Elasticsearch and Kibana macro that allows you to run Elasticsearch queries and display the results in XWiki pages using XWiki's scripting language, as well as to include Kibana widgets in XWiki pages.

[float]
[[data-integrations]]
=== Data import/export and validation
@@ -41,13 +41,13 @@ releases 2.0 and later do not support rivers.
[float]
==== Supported by Elasticsearch:

-* https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html[Logstash output to Elasticsearch]:
+* {logstash}/plugins-outputs-elasticsearch.html[Logstash output to Elasticsearch]:
The Logstash `elasticsearch` output plugin.
-* https://www.elastic.co/guide/en/logstash/current/plugins-inputs-elasticsearch.html[Elasticsearch input to Logstash]
+* {logstash}/plugins-inputs-elasticsearch.html[Elasticsearch input to Logstash]
The Logstash `elasticsearch` input plugin.
-* https://www.elastic.co/guide/en/logstash/current/plugins-filters-elasticsearch.html[Elasticsearch event filtering in Logstash]
+* {logstash}/plugins-filters-elasticsearch.html[Elasticsearch event filtering in Logstash]
The Logstash `elasticsearch` filter plugin.
-* https://www.elastic.co/guide/en/logstash/current/plugins-codecs-es_bulk.html[Elasticsearch bulk codec]
+* {logstash}/plugins-codecs-es_bulk.html[Elasticsearch bulk codec]
The Logstash `es_bulk` plugin decodes the Elasticsearch bulk format into individual events.
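As an illustration of the output plugin listed first, a minimal Logstash pipeline sketch; the host and index pattern are placeholders:

[source,ruby]
----
input {
  stdin { }                              # read events from standard input
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]          # assumed local Elasticsearch node
    index => "logstash-%{+YYYY.MM.dd}"   # placeholder daily index pattern
  }
}
----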

[float]
2 changes: 1 addition & 1 deletion docs/reference/getting-started.asciidoc
@@ -199,7 +199,7 @@ Now that we have our node (and cluster) up and running, the next step is to unde
Let's start with a basic health check, which we can use to see how our cluster is doing. We'll be using curl to do this, but you can use any tool that allows you to make HTTP/REST calls. Let's assume that we are still on the same node where we started Elasticsearch, and open another command shell window.

To check the cluster health, we will be using the <<cat,`_cat` API>>. You can
-run the command below in https://www.elastic.co/guide/en/kibana/{branch}/console-kibana.html[Kibana's Console]
+run the command below in {kibana}/console-kibana.html[Kibana's Console]
by clicking "VIEW IN CONSOLE" or with `curl` by clicking the "COPY AS CURL"
link below and pasting it into a terminal.
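The request itself lies outside this hunk; it is presumably along these lines (the Console form, with the equivalent `curl` invocation shown as a comment):

[source,js]
----
GET /_cat/health?v

// equivalent: curl -XGET 'localhost:9200/_cat/health?v'
----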

4 changes: 2 additions & 2 deletions docs/reference/redirects.asciidoc
@@ -60,14 +60,14 @@ directory. Instead, mappings should be created using the API with:

The `memcached` transport is no longer supported. Instead use the REST
interface over <<modules-http,HTTP>> or the
-https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/index.html[Java API].
+{javaclient}/index.html[Java API].

[role="exclude",id="modules-thrift"]
=== Thrift

The `thrift` transport is no longer supported. Instead use the REST
interface over <<modules-http,HTTP>> or the
-https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/index.html[Java API].
+{javaclient}/index.html[Java API].

// QUERY DSL

3 changes: 1 addition & 2 deletions docs/reference/search/request/preference.asciidoc
@@ -37,8 +37,7 @@ The `preference` is a query string parameter which can be set to:
preferences but it has to appear first: `_shards:2,3|_primary`

`_only_nodes`::
-Restricts the operation to nodes specified in node specification
-https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster.html
+Restricts the operation to nodes specified in <<cluster,node specification>>

Custom (string) value::
A custom value will be used to guarantee that
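To make the `preference` parameter concrete, a small hedged example: routing a user's repeated searches to the same shard copies by passing a custom string value (the preference string and index name are placeholders):

[source,js]
----
GET /my_index/_search?preference=user_12345
{
  "query": {
    "match": { "title": "elasticsearch" }
  }
}
----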
2 changes: 1 addition & 1 deletion docs/reference/setup/install.asciidoc
@@ -29,7 +29,7 @@ Elasticsearch website or from our RPM repository.

`docker`::

-An image is available for running Elasticsearch as a Docker container. It ships with https://www.elastic.co/guide/en/x-pack/current/index.html[X-Pack] pre-installed and may be downloaded from the Elastic Docker Registry.
+An image is available for running Elasticsearch as a Docker container. It ships with {xpack}/index.html[X-Pack] pre-installed and may be downloaded from the Elastic Docker Registry.
+
<<docker>>
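A hedged sketch of pulling and starting that image; the registry path and `5.2.0` tag are assumptions, and a real deployment needs the settings described in <<docker>>:

[source,sh]
----
docker pull docker.elastic.co/elasticsearch/elasticsearch:5.2.0
docker run -p 9200:9200 docker.elastic.co/elasticsearch/elasticsearch:5.2.0
----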

