
Commit 6c28705: "Update docs"
1 parent 6f76150

33 files changed: +133, -14 lines

docs/aggregations/bucket/parent/parent-aggregation-usage.asciidoc
Lines changed: 1 addition & 0 deletions

@@ -47,6 +47,7 @@ new ParentAggregation("name_of_parent_agg", typeof(CommitActivity)) <1>
 }
 ----
 <1> `join` field is determined from the _child_ type. In this example, it is `CommitActivity`
+
 <2> sub-aggregations are on the type determined from the `join` field. In this example, a `Project` is a parent of `CommitActivity`

 [source,javascript]

docs/aggregations/writing-aggregations.asciidoc
Lines changed: 2 additions & 0 deletions

@@ -232,6 +232,7 @@ return s => s
 );
 ----
 <1> a list of aggregation functions to apply
+
 <2> Using LINQ's `Aggregate()` function to accumulate/apply all of the aggregation functions

 [[handling-aggregate-response]]

@@ -275,5 +276,6 @@ var maxPerChild = childAggregation.Max("max_per_child");
 maxPerChild.Should().NotBeNull(); <2>
 ----
 <1> Do something with the average per child. Here we just assert it's not null
+
 <2> Do something with the max per child. Here we just assert it's not null

docs/client-concepts/connection-pooling/building-blocks/connection-pooling.asciidoc
Lines changed: 1 addition & 0 deletions

@@ -97,6 +97,7 @@ var pool = new CloudConnectionPool(cloudId, credentials); <2>
 var client = new ElasticClient(new ConnectionSettings(pool));
 ----
 <1> a username and password that can access Elasticsearch service on Elastic Cloud
+
 <2> `cloudId` is a value that can be retrieved from the Elastic Cloud web console

 This type of pool, like its parent the `SingleNodeConnectionPool`, is hardwired to opt out of
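Assembled, the cloud connection setup these callouts annotate looks roughly like the following. This is a minimal sketch assuming NEST 7.x; the username, password, and cloud ID values are placeholders, not working credentials.

```csharp
using System;
using Elasticsearch.Net;
using Nest;

// <1> a username and password that can access the Elasticsearch service
//     on Elastic Cloud (placeholder values)
var credentials = new BasicAuthenticationCredentials("elastic", "my-password");

// <2> the cloud id, as copied from the Elastic Cloud web console (placeholder)
var cloudId = "cluster-name:ZWxhc3RpYy1jbG91ZC1wbGFjZWhvbGRlcg==";

var pool = new CloudConnectionPool(cloudId, credentials);
var client = new ElasticClient(new ConnectionSettings(pool));
```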

docs/client-concepts/connection-pooling/exceptions/unexpected-exceptions.asciidoc
Lines changed: 7 additions & 0 deletions

@@ -58,8 +58,11 @@ audit = await audit.TraceUnexpectedException(
 );
 ----
 <1> set up a cluster with 10 nodes
+
 <2> where node 2 on port 9201 always throws an exception
+
 <3> The first call to 9200 returns a healthy response
+
 <4> ...but the second call, to 9201, returns a bad response

 Sometimes, an unexpected exception happens further down in the pipeline. In this scenario, we

@@ -98,7 +101,9 @@ audit = await audit.TraceUnexpectedException(
 );
 ----
 <1> calls on 9200 set up to throw a `HttpRequestException` or a `WebException`
+
 <2> calls on 9201 set up to throw an `Exception`
+
 <3> Assert that the audit trail for the client call includes the bad response from 9200 and 9201

 An unexpected hard exception on ping and sniff is something we *do* try to recover from and failover to retrying on the next node.

@@ -143,6 +148,8 @@ audit = await audit.TraceUnexpectedException(
 );
 ----
 <1> `InnerException` is the exception that brought the request down
+
 <2> The hard exception that happened on ping is still available though
+
 <3> An exception can be hard to relate back to a point in time, so the exception is also available on the audit trail

docs/client-concepts/connection-pooling/exceptions/unrecoverable-exceptions.asciidoc
Lines changed: 5 additions & 0 deletions

@@ -81,6 +81,7 @@ var audit = new Auditor(() => VirtualClusterWith
 );
 ----
 <1> Always succeed on ping
+
 <2> ...but always fail on calls with a 401 Bad Authentication response

 Now, let's make a client call. We'll see that the first audit event is a successful ping

@@ -101,7 +102,9 @@ audit = await audit.TraceElasticsearchException(
 );
 ----
 <1> First call results in a successful ping
+
 <2> Second call results in a bad response
+
 <3> The reason for the bad response is Bad Authentication

 When a bad authentication response occurs, the client attempts to deserialize the response body returned;

@@ -135,6 +138,7 @@ audit = await audit.TraceElasticsearchException(
 );
 ----
 <1> Always return a 401 bad response with an HTML response on client calls
+
 <2> Assert that the response body bytes are null

 Now in this example, by turning on `DisableDirectStreaming()` on `ConnectionSettings`, we see the same behaviour exhibited

@@ -169,5 +173,6 @@ audit = await audit.TraceElasticsearchException(
 );
 ----
 <1> Response bytes are set on the response
+
 <2> Assert that the response contains `"nginx/"`

docs/client-concepts/connection-pooling/max-retries/respects-max-retry.asciidoc
Lines changed: 1 addition & 0 deletions

@@ -84,6 +84,7 @@ audit = await audit.TraceCall(
 );
 ----
 <1> Set the maximum number of retries to 3
+
 <2> The client call trace returns a `MaxRetriesReached` audit after the initial attempt and the number of retries allowed

 In our previous example we simulated very fast failures, but in the real world, a call might take upwards of a second.

docs/client-concepts/connection-pooling/pinging/first-usage.asciidoc
Lines changed: 5 additions & 0 deletions

@@ -92,9 +92,13 @@ await audit.TraceCalls(
 );
 ----
 <1> The first call goes to 9200, which succeeds
+
 <2> The 2nd call does a ping on 9201 because it's used for the first time. This fails
+
 <3> So we ping 9202. This _also_ fails
+
 <4> We then ping 9203 because we haven't used it before and it succeeds
+
 <5> Finally, we assert that the connection pool has two nodes that are marked as dead

 All nodes are pinged on first use, provided they are healthy

@@ -121,5 +125,6 @@ await audit.TraceCalls(
 );
 ----
 <1> Pings on nodes always succeed
+
 <2> A successful ping on each node

docs/client-concepts/connection-pooling/request-overrides/disable-sniff-ping-per-request.asciidoc
Lines changed: 5 additions & 0 deletions

@@ -65,8 +65,11 @@ audit = await audit.TraceCalls(
 );
 ----
 <1> disable sniffing
+
 <2> first call is a successful ping
+
 <3> sniff on startup call happens here, on the second call
+
 <4> No sniff on startup again

 Now, let's disable pinging on the request

@@ -90,6 +93,7 @@ audit = await audit.TraceCall(
 );
 ----
 <1> disable ping
+
 <2> No ping after sniffing

 Finally, let's demonstrate disabling both sniff and ping on the request

@@ -111,5 +115,6 @@ audit = await audit.TraceCall(
 );
 ----
 <1> disable ping and sniff
+
 <2> no ping or sniff before the call

docs/client-concepts/connection-pooling/round-robin/skip-dead-nodes.asciidoc
Lines changed: 3 additions & 0 deletions

@@ -140,7 +140,9 @@ await audit.TraceCalls(
 );
 ----
 <1> The first call goes to 9200 which succeeds
+
 <2> The 2nd call does a ping on 9201 because it's used for the first time. It fails, so we wrap over to node 9202
+
 <3> The next call goes to 9203 which fails so we should wrap over

 A cluster with 2 nodes where the second node fails on ping

@@ -191,5 +193,6 @@ await audit.TraceCalls(
 );
 ----
 <1> All the calls fail
+
 <2> After all our registered nodes are marked dead, we want to sample a single dead node each time to quickly see if the cluster is back up. We do not want to retry all 4 nodes

docs/client-concepts/connection-pooling/sniffing/on-connection-failure.asciidoc
Lines changed: 7 additions & 0 deletions

@@ -79,9 +79,13 @@ audit = await audit.TraceCalls(
 );
 ----
 <1> When the call fails on 9201, the following sniff succeeds and returns a new cluster state of healthy nodes. This cluster only has 3 nodes and the known masters are 9200 and 9202. A search on 9201 is set up to still fail once
+
 <2> After this second failure on 9201, another sniff will happen which returns a cluster state that no longer fails but looks completely different; it's now three nodes on ports 9210 - 9212, with 9210 and 9212 being master eligible.
+
 <3> We assert we do a sniff on our first known master node 9202 after the failed call on 9201
+
 <4> Our pool should now have three nodes
+
 <5> We assert we do a sniff on the first master node in our updated cluster

 ==== Sniffing after ping failure

@@ -147,8 +151,11 @@ audit = await audit.TraceCalls(
 );
 ----
 <1> We assert we do a sniff on our first known master node 9202
+
 <2> Our pool should now have three nodes
+
 <3> We assert we do a sniff on the first master node in our updated cluster
+
 <4> 9210 was already pinged after the sniff returned the new nodes

 ==== Client uses publish address

docs/client-concepts/connection-pooling/sniffing/on-startup.asciidoc
Lines changed: 1 addition & 0 deletions

@@ -120,6 +120,7 @@ await audit.TraceCall(new ClientCall {
 });
 ----
 <1> Sniffing returns 8 nodes, starting from 9204
+
 <2> After successfully sniffing, the ping now happens on 9204

 ==== Prefers master eligible nodes

docs/client-concepts/connection-pooling/sniffing/role-detection.asciidoc
Lines changed: 5 additions & 0 deletions

@@ -138,6 +138,7 @@ var audit = new Auditor(() => VirtualClusterWith
 };
 ----
 <1> Before the sniff, assert we only see three master-only nodes
+
 <2> After the sniff, assert we now know about the existence of 20 nodes.

 After the sniff has happened on 9200 before the first API call, assert that the subsequent API

@@ -218,7 +219,9 @@ var audit = new Auditor(() => VirtualClusterWith
 };
 ----
 <1> for testing simplicity, disable pings
+
 <2> We only want to execute API calls to nodes in rack_one
+
 <3> After sniffing on startup, assert that the pool of nodes that the client will execute API calls against only contains the three nodes that are in `rack_one`

 With the cluster set up, assert that the sniff happens on 9200 before the first API call

@@ -295,6 +298,8 @@ await audit.TraceUnexpectedElasticsearchException(new ClientCall
 });
 ----
 <1> The audit trail indicates a sniff for the very first time on startup
+
 <2> The sniff succeeds because the node predicate is ignored when sniffing
+
 <3> when trying to do an actual API call however, the predicate prevents any nodes from being attempted

docs/client-concepts/high-level/analysis/writing-analyzers.asciidoc
Lines changed: 2 additions & 0 deletions

@@ -100,6 +100,7 @@ var createIndexResponse = _client.Indices.Create("my-index", c => c
 );
 ----
 <1> Pre-defined list of English stopwords within Elasticsearch
+
 <2> Use the `standard_english` analyzer configured

 [source,javascript]

@@ -261,6 +262,7 @@ var createIndexResponse = _client.Indices.Create("questions", c => c
 );
 ----
 <1> Use an analyzer at index time that strips HTML tags
+
 <2> Use an analyzer at search time that does not strip HTML tags

 With this in place, the text of a question body will be analyzed with the `index_question` analyzer
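For context, the index creation that the first hunk's callouts annotate looks roughly like this. A sketch assuming NEST 7.x: `standard_english` and the built-in `_english_` stopword list come from the doc's callouts, while the `Project` POCO and its mapping are illustrative.

```csharp
using Nest;

var client = new ElasticClient();

var createIndexResponse = client.Indices.Create("my-index", c => c
    .Settings(s => s
        .Analysis(a => a
            .Analyzers(an => an
                // <1> a standard analyzer configured with the pre-defined
                //     `_english_` stopword list within Elasticsearch
                .Standard("standard_english", sa => sa
                    .StopWords("_english_")))))
    .Map<Project>(m => m
        .Properties(p => p
            .Text(t => t
                .Name(n => n.Name)
                // <2> use the `standard_english` analyzer configured above
                .Analyzer("standard_english")))));

// hypothetical POCO used for the mapping above
class Project
{
    public string Name { get; set; }
}
```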

docs/client-concepts/high-level/getting-started.asciidoc
Lines changed: 1 addition & 0 deletions

@@ -107,6 +107,7 @@ var indexResponse = client.IndexDocument(person); <1>
 var asyncIndexResponse = await client.IndexDocumentAsync(person); <2>
 ----
 <1> synchronous method that returns an `IndexResponse`
+
 <2> asynchronous method that returns a `Task<IndexResponse>` that can be awaited

 NOTE: All methods available within NEST are exposed as both synchronous and asynchronous versions,
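Expanded into a self-contained form, the snippet these two callouts annotate looks roughly like this (a sketch assuming NEST 7.x; the `Person` POCO is illustrative):

```csharp
using System.Threading.Tasks;
using Nest;

public class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
}

public static class IndexingExample
{
    public static async Task RunAsync(ElasticClient client, Person person)
    {
        // <1> synchronous method that blocks and returns an IndexResponse
        var indexResponse = client.IndexDocument(person);

        // <2> asynchronous method that returns a Task<IndexResponse>
        //     which can be awaited
        var asyncIndexResponse = await client.IndexDocumentAsync(person);
    }
}
```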

docs/client-concepts/high-level/indexing/indexing-documents.asciidoc
Lines changed: 12 additions & 0 deletions

@@ -40,6 +40,7 @@ if (!indexResponse.IsValid)
 var indexResponseAsync = await client.IndexDocumentAsync(person); <2>
 ----
 <1> synchronous method that returns an IIndexResponse
+
 <2> asynchronous method that returns a Task<IIndexResponse> that can be awaited

 ==== Single documents with parameters

@@ -61,6 +62,7 @@ client.Index(person, i => i.Index("people")); <1>
 client.Index(new IndexRequest<Person>(person, "people")); <2>
 ----
 <1> fluent syntax
+
 <2> object initializer syntax

 ==== Multiple documents with `IndexMany`

@@ -111,8 +113,11 @@ if (indexManyResponse.Errors) <2>
 var indexManyAsyncResponse = await client.IndexManyAsync(people); <4>
 ----
 <1> synchronous method that returns an IBulkResponse
+
 <2> the response can be inspected to see if any of the bulk operations resulted in an error
+
 <3> If there are errors, they can be enumerated and inspected
+
 <4> asynchronous method that returns a Task<IBulkResponse> that can be awaited

 ==== Multiple documents with bulk

@@ -136,6 +141,7 @@ var asyncBulkIndexResponse = await client.BulkAsync(b => b
 .IndexMany(people)); <2>
 ----
 <1> synchronous method that returns an IBulkResponse, the same as IndexMany and can be inspected in the same way for errors
+
 <2> asynchronous method that returns a Task<IBulkResponse> that can be awaited

 ==== Multiple documents with `BulkAllObservable` helper

@@ -167,8 +173,11 @@ var bulkAllObservable = client.BulkAll(people, b => b
 });
 ----
 <1> how long to wait between retries
+
 <2> how many retries are attempted if a failure occurs
+
 <3> items per bulk request
+
 <4> perform the indexing and wait up to 15 minutes; whilst the BulkAll calls are asynchronous, this is a blocking operation

 ==== Advanced bulk indexing

@@ -204,7 +213,10 @@ client.BulkAll(people, b => b
 }));
 ----
 <1> customise the individual operations in the bulk request before it is dispatched
+
 <2> Index each document into either even-index or odd-index
+
 <3> decide if a document should be retried in the event of a failure
+
 <4> if a document cannot be indexed this delegate is called
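The `BulkAllObservable` callouts above map onto a call shaped roughly like the following. A sketch assuming NEST 7.x; the index name, back-off values, and page size mirror the callouts, while the `Person` POCO and document list are illustrative.

```csharp
using System;
using System.Collections.Generic;
using Nest;

var client = new ElasticClient();
var people = new List<Person>(); // hypothetical documents to index

var bulkAllObservable = client.BulkAll(people, b => b
    .Index("people")
    .BackOffTime("30s")  // <1> how long to wait between retries
    .BackOffRetries(2)   // <2> how many retries are attempted on failure
    .Size(1000));        // <3> items per bulk request

// <4> perform the indexing and wait up to 15 minutes; whilst the BulkAll
//     calls are asynchronous, Wait() makes this a blocking operation
bulkAllObservable.Wait(TimeSpan.FromMinutes(15), next =>
{
    // invoked after each successful bulk response, e.g. to log progress
});

class Person
{
    public string FirstName { get; set; }
}
```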

docs/client-concepts/high-level/indexing/ingest-nodes.asciidoc
Lines changed: 1 addition & 0 deletions

@@ -57,5 +57,6 @@ var settings = new ConnectionSettings(pool).NodePredicate(n => n.IngestEnabled);
 var indexingClient = new ElasticClient(settings);
 ----
 <1> list of cluster nodes
+
 <2> predicate to select only nodes with ingest capabilities
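In full, the ingest-node selection these callouts annotate looks roughly like this (a sketch assuming NEST 7.x; the node URIs are placeholders):

```csharp
using System;
using Elasticsearch.Net;
using Nest;

// <1> the list of cluster nodes the client may route requests to
var uris = new[]
{
    new Uri("http://localhost:9200"),
    new Uri("http://localhost:9201"),
    new Uri("http://localhost:9202"),
};
var pool = new StaticConnectionPool(uris);

// <2> a predicate that selects only nodes with ingest capabilities,
//     so indexing requests that use pipelines land on ingest nodes
var settings = new ConnectionSettings(pool).NodePredicate(n => n.IngestEnabled);
var indexingClient = new ElasticClient(settings);
```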

docs/client-concepts/high-level/indexing/pipelines.asciidoc
Lines changed: 8 additions & 0 deletions

@@ -92,12 +92,19 @@ var person = new Person
 var indexResponse = client.Index(person, p => p.Index("people").Pipeline("person-pipeline")); <8>
 ----
 <1> automatically create the mapping from the type
+
 <2> create an additional field to store the initials
+
 <3> map field as IP Address type
+
 <4> map GeoIp as object
+
 <5> uppercase the lastname
+
 <6> use a painless script to populate the new field
+
 <7> use ingest-geoip plugin to enrich the GeoIp object from the supplied IP Address
+
 <8> index the document using the created pipeline

 ==== Increasing timeouts

@@ -122,5 +129,6 @@ client.Bulk(b => b
 );
 ----
 <1> increases the server-side bulk timeout
+
 <2> increases the HTTP request timeout
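The two timeout callouts distinguish the server-side bulk timeout from the client's own HTTP timeout; a bulk call raising both might be sketched as follows. This assumes NEST 7.x, and the pipeline name, index, and timeout values are illustrative.

```csharp
using System;
using System.Collections.Generic;
using Nest;

var client = new ElasticClient();
var people = new List<Person>(); // hypothetical documents

var bulkResponse = client.Bulk(b => b
    .Index("people")
    .Pipeline("person-pipeline")
    .Timeout("5m") // <1> increases the server-side bulk timeout
    .IndexMany(people)
    // <2> increases the HTTP request timeout, so the client
    //     waits at least as long as the server is allowed to take
    .RequestConfiguration(rc => rc
        .RequestTimeout(TimeSpan.FromMinutes(5))));

class Person
{
    public string FirstName { get; set; }
}
```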

docs/client-concepts/high-level/inference/field-inference.asciidoc
Lines changed: 7 additions & 0 deletions

@@ -513,10 +513,15 @@ private class Precedence
 }
 ----
 <1> Even though this property has various attributes applied we provide an override on ConnectionSettings later that takes precedence.
+
 <2> Has a `TextAttribute`, `PropertyNameAttribute` and a `JsonPropertyAttribute` - the `TextAttribute` takes precedence.
+
 <3> Has both a `PropertyNameAttribute` and a `JsonPropertyAttribute` - the `PropertyNameAttribute` takes precedence.
+
 <4> `JsonPropertyAttribute` takes precedence.
+
 <5> This property we are going to hard code in our custom serializer to resolve to `ask`.
+
 <6> We are going to register a DefaultFieldNameInferrer on ConnectionSettings that will uppercase all properties.

 We'll create a custom `IPropertyMappingProvider` that renames any property named `AskSerializer` to `ask`.

@@ -562,7 +567,9 @@ usingSettings.Expect("data").ForField(Field<Precedence>(p => p.DataMember));
 usingSettings.Expect("DEFAULTFIELDNAMEINFERRER").ForField(Field<Precedence>(p => p.DefaultFieldNameInferrer));
 ----
 <1> Rename on the mapping for the `Precedence` type
+
 <2> Default inference for a field, if no other rules apply or are specified for a given field
+
 <3> Hook up the custom `IPropertyMappingProvider`

 The same naming rules also apply when indexing a document

docs/client-concepts/high-level/inference/index-name-inference.asciidoc
Lines changed: 1 addition & 0 deletions

@@ -85,6 +85,7 @@ var client = new ElasticClient(settings);
 var projectSearchResponse = client.Search<Project>();
 ----
 <1> a default index to use, when no other index can be inferred
+
 <2> an index to use when `Project` is the target POCO type

 will send a search request to the API endpoint
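The settings these callouts annotate can be sketched roughly as follows, assuming NEST 7.x; the index names `defaultindex` and `projects` and the `Project` POCO are illustrative.

```csharp
using System;
using Elasticsearch.Net;
using Nest;

var pool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
var settings = new ConnectionSettings(pool)
    // <1> a default index to use, when no other index can be inferred
    .DefaultIndex("defaultindex")
    // <2> an index to use when Project is the target POCO type
    .DefaultMappingFor<Project>(m => m.IndexName("projects"));

var client = new ElasticClient(settings);

// resolves to the "projects" index, because Project is the target type
var projectSearchResponse = client.Search<Project>();

class Project
{
    public string Name { get; set; }
}
```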
