This repository was archived by the owner on Apr 10, 2025. It is now read-only.

Commit b5128df (Initial commit, 0 parents)

13 files changed: 370 additions, 0 deletions

.ci/run-elasticsearch.sh

Lines changed: 35 additions & 0 deletions
#!/usr/bin/env bash
#
# Starts an Elasticsearch container in serverless mode for testing and CI.
# Requires STACK_VERSION (the image tag) to be set and a Docker network
# named "elastic" to exist; set DETACH=true to run in the background.

environment=($(cat <<-END
  --env ELASTIC_PASSWORD=changeme
  --env node.name=elasticsearch-serverless
  --env cluster.name=elasticsearch-serverless
  --env cluster.initial_master_nodes=elasticsearch-serverless
  --env discovery.seed_hosts=instance
  --env cluster.routing.allocation.disk.threshold_enabled=false
  --env bootstrap.memory_lock=true
  --env node.attr.testattr=test
  --env path.repo=/tmp
  --env repositories.url.allowed_urls=http://snapshot.test*
  --env action.destructive_requires_name=false
  --env ingest.geoip.downloader.enabled=false
  --env cluster.deprecation_indexing.enabled=false
  --env xpack.security.enabled=false
  --env xpack.security.http.ssl.enabled=false
END
))

export DETACH=${DETACH-false}

docker run \
  --name elasticsearch-serverless \
  --network elastic \
  --env "ES_JAVA_OPTS=-Des.serverless=true -Xms1g -Xmx1g -da:org.elasticsearch.xpack.ccr.index.engine.FollowingEngineAssertions" \
  "${environment[@]}" \
  --volume serverless-data:/usr/share/elasticsearch/data \
  --publish 9200:9200 \
  --ulimit nofile=65536:65536 \
  --ulimit memlock=-1:-1 \
  --detach=$DETACH \
  --rm \
  docker.elastic.co/elasticsearch/elasticsearch:$STACK_VERSION;
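
The script expects `STACK_VERSION` to name the image tag and attaches the container to a Docker network called `elastic`. A minimal local invocation might look like this (a sketch; the version tag shown is hypothetical):

```bash
# Create the network the script attaches to, if it does not exist yet.
docker network create elastic || true

# Image tag to run (hypothetical value; use the tag you actually need).
export STACK_VERSION=8.11.0

# DETACH=true runs the container in the background (the script defaults to false).
DETACH=true bash .ci/run-elasticsearch.sh
```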

.github/workflows/tests.yml

Lines changed: 16 additions & 0 deletions
name: main
on: [push]

jobs:
  rspec:
    strategy:
      matrix:
        <<[lang]>>: [ ]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: <<[lang]>>/setup-<<[lang]>>@v1
        with:
          <<[lang]>>-version: ${{ matrix.<<[lang]>> }}
      - name: Build and test with
        run: |

.gitignore

Lines changed: 6 additions & 0 deletions
*.gem
.bundle
Gemfile.lock
.DS_Store
tmp
*.log

CONTRIBUTING.md

Lines changed: 28 additions & 0 deletions
Describe how to install the library for development purposes.

### Run Tests


### Run Elasticsearch Serverless Docker container


### Contributing Code Changes

1. Please make sure you have signed the [Contributor License Agreement](http://www.elastic.co/contributor-agreement/). We are not asking you to assign copyright to us, but to give us the right to distribute your code without restriction. We ask this of all contributors in order to assure our users of the origin and continuing existence of the code. You only need to sign the CLA once.
2. Rebase your changes. Update your local repository with the most recent code from the main `elasticsearch-serverless-<<[lang]>>` repository and rebase your branch on top of the latest `main` branch (see the sketch after this list).
3. Submit a pull request. Push your local changes to your forked repository and [submit a pull request](https://github.com/elastic/elasticsearch-serverless/pulls), mentioning the issue number if any (`Closes #123`). Make sure that you add or modify tests related to your changes so that CI will pass.
4. Sit back and wait. There may be some discussion on your pull request, and if changes are needed we would love to work with you to get your pull request merged into `elasticsearch-serverless-<<[lang]>>`.
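
For step 2, a typical rebase flow looks like the following (a minimal sketch; the `upstream` remote name and URL are assumptions about your local setup):

```bash
# Register the main repository as a remote (only needed once).
git remote add upstream https://github.com/elastic/elasticsearch-serverless-<<[lang]>>.git

# Fetch the latest main branch and rebase your feature branch on top of it.
git fetch upstream
git rebase upstream/main
```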

LICENSE

Lines changed: 19 additions & 0 deletions
Copyright 2022 Elasticsearch B.V (https://www.elastic.co)

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

README.md

Lines changed: 34 additions & 0 deletions
# How to use this template

This is the template structure for an Elasticsearch Serverless Client.
It can be used to start a new repository for a specific client (e.g. elasticsearch-serverless-php).

## Parameters

When you create a new repository using this template, you have to customize the
contents according to the programming language of the client.

We offer an automatic tool to search and replace occurrences of `<<[lang]>>` and
`<<[LANG]>>` in all the files.

The `<<[lang]>>` is the placeholder for the repository name `elasticsearch-serverless-<<[lang]>>`.
For instance, if `<<[lang]>> = "php"` then the repository will be `elasticsearch-serverless-php`.

The `<<[LANG]>>` is the placeholder for the language name. For instance, `<<[LANG]>> = "PHP"`.
This is typically used in README or documentation files.

## How to customize the files

You can customize the files by replacing the parameters, using the following command:

```bash
customize.sh <<[lang]>>
```

where `<<[lang]>>` is the language-specific client to use (e.g. `customize.sh php`).
This will search and replace all occurrences of `<<[lang]>>` and `<<[LANG]>>` in some of the files
(e.g. README, CONTRIBUTING, etc.).
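
After running it, a quick way to confirm that no placeholders were left behind is to search for them (a sketch; any remaining matches point at files the script does not rewrite):

```bash
grep -R --exclude-dir=.git "<<\[lang\]>>\|<<\[LANG\]>>" . || echo "no placeholders left"
```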

## Choose the correct LICENSE

You need to choose the LICENSE for your custom client. Remember to change the `LICENSE` file.

README_TEMPLATE.md

Lines changed: 58 additions & 0 deletions
# Elasticsearch Serverless Client

[![main](https://github.com/elastic/elasticsearch-serverless-<<[lang]>>/actions/workflows/tests.yml/badge.svg?branch=main)](https://github.com/elastic/elasticsearch-serverless-<<[lang]>>/actions/workflows/tests.yml)

This is the official Elastic client for the **Elasticsearch Serverless** service. If you're looking to develop your <<[LANG]>> application with the Elasticsearch Stack, you should look at the [Elasticsearch Client](https://github.com/elastic/elasticsearch-<<[lang]>>) instead. If you're looking to develop your <<[LANG]>> application with Elastic Enterprise Search, you should look at the [Enterprise Search Client](https://github.com/elastic/enterprise-search-<<[lang]>>/).

## Installation


### Instantiate a Client


### Using the API

Once you've instantiated a client with your API key and Elasticsearch endpoint, you can start ingesting documents into Elasticsearch Service. You can use the **Bulk API** for this. This API allows you to index, update, and delete several documents in one request. You call the `bulk` API on the client with a body parameter: an array of hashes that define the action and a document. Here's an example of indexing some classic books into the `books` index:

```<<[lang]>>
# First we build our data:
body = [
  { index: { _index: 'books', data: {name: "Snow Crash", "author": "Neal Stephenson", "release_date": "1992-06-01", "page_count": 470} } },
  { index: { _index: 'books', data: {name: "Revelation Space", "author": "Alastair Reynolds", "release_date": "2000-03-15", "page_count": 585} } },
  { index: { _index: 'books', data: {name: "1984", "author": "George Orwell", "release_date": "1985-06-01", "page_count": 328} } },
  { index: { _index: 'books', data: {name: "Fahrenheit 451", "author": "Ray Bradbury", "release_date": "1953-10-15", "page_count": 227} } },
  { index: { _index: 'books', data: {name: "Brave New World", "author": "Aldous Huxley", "release_date": "1932-06-01", "page_count": 268} } },
  { index: { _index: 'books', data: {name: "The Handmaid's Tale", "author": "Margaret Atwood", "release_date": "1985-06-01", "page_count": 311} } }
]
# Then we send the data via the bulk api:
> response = client.bulk(body: body)
# And we can check that the items were indexed and given an id in the response:
> response['items']
=>
[{"index"=>{"_index"=>"books", "_id"=>"Pdink4cBmDx329iqhzM2", "_version"=>1, "result"=>"created", "_shards"=>{"total"=>2, "successful"=>1, "failed"=>0}, "_seq_no"=>0, "_primary_term"=>1, "status"=>201}},
 {"index"=>{"_index"=>"books", "_id"=>"Ptink4cBmDx329iqhzM2", "_version"=>1, "result"=>"created", "_shards"=>{"total"=>2, "successful"=>1, "failed"=>0}, "_seq_no"=>1, "_primary_term"=>1, "status"=>201}},
 {"index"=>{"_index"=>"books", "_id"=>"P9ink4cBmDx329iqhzM2", "_version"=>1, "result"=>"created", "_shards"=>{"total"=>2, "successful"=>1, "failed"=>0}, "_seq_no"=>2, "_primary_term"=>1, "status"=>201}},
 {"index"=>{"_index"=>"books", "_id"=>"QNink4cBmDx329iqhzM2", "_version"=>1, "result"=>"created", "_shards"=>{"total"=>2, "successful"=>1, "failed"=>0}, "_seq_no"=>3, "_primary_term"=>1, "status"=>201}},
 {"index"=>{"_index"=>"books", "_id"=>"Qdink4cBmDx329iqhzM2", "_version"=>1, "result"=>"created", "_shards"=>{"total"=>2, "successful"=>1, "failed"=>0}, "_seq_no"=>4, "_primary_term"=>1, "status"=>201}},
 {"index"=>{"_index"=>"books", "_id"=>"Qtink4cBmDx329iqhzM2", "_version"=>1, "result"=>"created", "_shards"=>{"total"=>2, "successful"=>1, "failed"=>0}, "_seq_no"=>5, "_primary_term"=>1, "status"=>201}}]

```

When you use the client to make a request to Elasticsearch, it returns an API Response object. You can see the HTTP return code by calling `status` and the HTTP headers by calling `headers` on the response object. The Response object also behaves as a Hash, so you can access the body values directly, as seen in the previous example with `response['items']`.

Now that some data is available, you can search your documents using the **Search API**:

```<<[lang]>>
> response = client.search(index: 'books', q: 'snow')
> response['hits']['hits']
=> [{"_index"=>"books", "_id"=>"Pdink4cBmDx329iqhzM2", "_score"=>1.5904956, "_source"=>{"name"=>"Snow Crash", "author"=>"Neal Stephenson", "release_date"=>"1992-06-01", "page_count"=>470}}]
```
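
The same search can also be issued directly over HTTP, which is handy for checking what the client returns. Here is a minimal `curl` sketch (assuming a local, security-disabled instance on `localhost:9200`, such as the one started by the script in `.ci/`); the `-i` flag prints the HTTP status line and headers that the client exposes via `status` and `headers`:

```bash
curl -i "http://localhost:9200/books/_search?q=snow"
```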

## Development

See [CONTRIBUTING](./CONTRIBUTING.md).

### Docs

Some questions, assumptions and general notes about this project can be found in [the docs directory](./docs/questions-and-assumptions.md).

customize.sh

Lines changed: 31 additions & 0 deletions
#!/usr/bin/env bash

dir=$(dirname "$(realpath -s "$0")")

lang=$1
LANG="${lang^}"   # capitalized language name, substituted for <<[LANG]>>

# Update README.md
sed -i "s/<<\[lang\]>>/$lang/g" $dir/README_TEMPLATE.md
sed -i "s/<<\[LANG\]>>/$LANG/g" $dir/README_TEMPLATE.md

# CONTRIBUTING.md
sed -i "s/<<\[lang\]>>/$lang/g" $dir/CONTRIBUTING.md
sed -i "s/<<\[LANG\]>>/$LANG/g" $dir/CONTRIBUTING.md

# docs/getting-started.MDX
sed -i "s/<<\[lang\]>>/$lang/g" $dir/docs/getting-started.MDX
sed -i "s/<<\[LANG\]>>/$LANG/g" $dir/docs/getting-started.MDX

# docs/landing-page.MDX
sed -i "s/<<\[lang\]>>/$lang/g" $dir/docs/landing-page.MDX
sed -i "s/<<\[LANG\]>>/$LANG/g" $dir/docs/landing-page.MDX

# .github/workflows/tests.yml
sed -i "s/<<\[lang\]>>/$lang/g" $dir/.github/workflows/tests.yml
sed -i "s/<<\[LANG\]>>/$LANG/g" $dir/.github/workflows/tests.yml

# Clean up
rm README.md
mv README_TEMPLATE.md README.md
rm customize.sh
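
A typical invocation from the repository root might look like this (a sketch; `ruby` is only an example argument):

```bash
chmod +x customize.sh
./customize.sh ruby   # rewrites the placeholders and promotes README_TEMPLATE.md to README.md
```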

docs/getting-started.MDX

Lines changed: 82 additions & 0 deletions
---
id: gettingStarted
slug: /serverless-<<[lang]>>/docs/getting-started
title: Getting started with the Serverless <<[LANG]>> client
description: This page contains quickstart information about the Serverless <<[LANG]>> client.
date: 2023-04-27
tags: ['serverless','<<[LANG]>> client','docs', 'getting started']
---

This page guides you through the installation process of the Serverless <<[LANG]>>
client, shows you how to instantiate the client, and how to perform basic
Elasticsearch operations with it.


## Requirements

* <<[LANG]>> x or higher installed on your system.


## Installation


### Using the command line

You can install the Elasticsearch Serverless <<[LANG]>> client with the following
commands:

```bash
```


## Instantiate a client

You can instantiate a client by running the following command:

```<<[lang]>>

```

You can find the Elasticsearch endpoint on the Cloud deployment management page.

<DocImage url="images/copy-endpoint.gif" alt="Copy the endpoint for Elasticsearch"/>

You can create a new API Key under **Stack Management** > **Security**:

<DocImage url="images/setup-api-key.gif" alt="Create and copy API Key"/>


## Using the API

After you have instantiated a client with your API key and Elasticsearch endpoint,
you can start ingesting documents into the Elasticsearch Service. You can use
the Bulk API for this. This API enables you to index, update, and delete several
documents in one request.


### Creating an index and ingesting documents

You can call the `bulk` API with a body parameter, an array of hashes that
define the action and a document.

The following is an example of indexing some classic books into the `books`
index:

```<<[lang]>>

```
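
The same operation can also be performed directly over HTTP, for example with `curl` (a sketch assuming a local instance on `localhost:9200` with security disabled; the Bulk API expects newline-delimited JSON with a trailing newline):

```bash
curl -X POST "http://localhost:9200/_bulk" \
     -H "Content-Type: application/x-ndjson" \
     --data-binary $'{"index":{"_index":"books"}}\n{"name":"Snow Crash","author":"Neal Stephenson","release_date":"1992-06-01","page_count":470}\n'
```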

When you use the client to make a request to Elasticsearch, it returns an API
response object. You can check the HTTP return code by calling `status` and the
HTTP headers by calling `headers` on the response object. The response object
also behaves as a Hash, so you can access the body values directly, as seen in
the previous example with ``.


### Searching

Now that some data is available, you can search your documents using the
**Search API**:

```<<[lang]>>
```

docs/images/copy-endpoint.gif

327 KB
