hadoop-hdds/docs/content/beyond/Containers.md (41 additions & 24 deletions)
@@ -25,8 +25,9 @@ Docker is heavily used in Ozone development, with three principal use-cases:
 * __dev__:
     * We use docker to start local pseudo-clusters (docker provides a unified environment, but no image creation is required)
 * __test__:
-    * We create docker images from the dev branches to test ozone in kubernetes and other container orchestator system
-    * We provide _apache/ozone_ images for each release to make it easier the evaluation of Ozone. These images are __not__ created __for production__ usage.
+    * We create docker images from the dev branches to test ozone in kubernetes and other container orchestrator systems
+    * We provide _apache/ozone_ images for each release to make the evaluation of Ozone easier.
+      These images are __not__ created __for production__ usage.
 
 <div class="alert alert-warning" role="alert">
 We <b>strongly</b> recommend that you create your own custom images when you
@@ -36,7 +37,7 @@ shipped container images and k8s resources as examples and guides to help you
 </div>
 
 * __production__:
-    * We document how can you create your own docker image for your production cluster.
+    * We have documentation on how you can create your own docker image for your production cluster.
 
 Let's check out each of the use-cases in more detail:
 
@@ -46,38 +47,41 @@ The Ozone artifact contains example docker-compose directories to make it easier to
 
 From distribution:
 
-```
+```bash
 cd compose/ozone
 docker-compose up -d
 ```
 
-After a local build
+After a local build:
 
-```
+```bash
 cd hadoop-ozone/dist/target/ozone-*/compose
 docker-compose up -d
 ```
 
 These environments are very important tools to start different types of Ozone clusters at any time.
 
-To be sure that the compose files are up-to-date, we also provide acceptance test suites which start the cluster and check the basic behaviour.
+To be sure that the compose files are up-to-date, we also provide acceptance test suites which start
+the cluster and check the basic behaviour.
 
-The acceptance tests are part of the distribution, and you can find the test definitions in `./smoketest` directory.
+The acceptance tests are part of the distribution, and you can find the test definitions in the `smoketest` directory.
 
 You can start the tests from any compose directory:
 
 For example:
 
-```
+```bash
 cd compose/ozone
 ./test.sh
 ```
 
 ### Implementation details
 
-`./compose` tests are based on the apache/hadoop-runner docker image. The image itself doesn't contain any Ozone jar file or binary just the helper scripts to start ozone.
+The `compose` tests are based on the apache/hadoop-runner docker image. The image itself does not contain
+any Ozone jar file or binary, just the helper scripts to start ozone.
 
-hadoop-runner provdes a fixed environment to run Ozone everywhere, but the ozone distribution itself is mounted from the including directory:
+hadoop-runner provides a fixed environment to run Ozone everywhere, but the ozone distribution itself
+is mounted from the enclosing directory:
 
 (Example docker-compose fragment)
 
@@ -91,7 +95,9 @@ hadoop-runner provides a fixed environment to run Ozone everywhere, but the ozone
 
 ```
 
-The containers are conigured based on environment variables, but because the same environment variables should be set for each containers we maintain the list of the environment variables in a separated file:
+The containers are configured based on environment variables, but because the same environment
+variables should be set for each container, we maintain the list of environment variables
+in a separate file:

-As you can see we use naming convention. Based on the name of the environment variable, the appropariate hadoop config XML (`ozone-site.xml` in our case) will be generated by a [script](https://github.com/apache/hadoop/tree/docker-hadoop-runner-latest/scripts) which is included in the `hadoop-runner` base image.
+As you can see, we use a naming convention. Based on the name of the environment variable, the
+appropriate hadoop config XML (`ozone-site.xml` in our case) will be generated by a
+[script](https://github.com/apache/hadoop/tree/docker-hadoop-runner-latest/scripts) which is
+included in the `hadoop-runner` base image.
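To illustrate the idea behind this convention, here is a minimal shell sketch. It is NOT the actual script shipped in the `hadoop-runner` image (that is a separate tool, linked above); the variable form `OZONE-SITE.XML_<key>=<value>` and the helper name `to_xml_property` are assumptions made for this example only.

```shell
# Illustrative sketch only, not the real hadoop-runner converter: take an
# environment entry of the assumed form OZONE-SITE.XML_<key>=<value> and
# print the corresponding Hadoop-style XML property.
to_xml_property() {
  local line="$1"            # e.g. "OZONE-SITE.XML_ozone.om.address=om"
  local key="${line#*_}"     # strip the "OZONE-SITE.XML_" prefix
  key="${key%%=*}"           # keep only the configuration key
  local value="${line#*=}"   # keep only the value
  printf '<property><name>%s</name><value>%s</value></property>\n' "$key" "$value"
}

to_xml_property "OZONE-SITE.XML_ozone.om.address=om"
# prints: <property><name>ozone.om.address</name><value>om</value></property>
```

The real script processes every such variable and writes a complete config XML file; the linked script in the `hadoop-runner` image is the authoritative implementation.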
 
-The [entrypoint](https://github.com/apache/hadoop/blob/docker-hadoop-runner-latest/scripts/starter.sh) of the `hadoop-runner` image contains a helper shell script which triggers this transformation and cab do additional actions (eg. initialize scm/om storage, download required keytabs, etc.) based on environment variables.
+The [entrypoint](https://github.com/apache/hadoop/blob/docker-hadoop-runner-latest/scripts/starter.sh)
+of the `hadoop-runner` image contains a helper shell script which triggers this transformation and
+can do additional actions (e.g. initialize scm/om storage, download required keytabs, etc.)
+based on environment variables.
 
 ## Test/Staging
 
-The `docker-compose` based approach is recommended only for local test not for multi node cluster. To use containers on a multi-node cluster we need a Container Orchestrator like Kubernetes.
+The `docker-compose` based approach is recommended only for local testing, not for multi-node clusters.
+To use containers on a multi-node cluster we need a Container Orchestrator like Kubernetes.
 
 Kubernetes example files are included in the `kubernetes` folder.
 
-*Please note*: all the provided images are based the `hadoop-runner` image which contains all the required tool for testing in staging environments. For production we recommend to create your own, hardened image with your own base image.
+*Please note*: all the provided images are based on the `hadoop-runner` image which contains all the
+required tools for testing in staging environments. For production we recommend creating your own,
+hardened image with your own base image.
 
 ### Test the release
 
 The release can be tested by deploying any of the example clusters:
 
-```
+```bash
 cd kubernetes/examples/ozone
 kubectl apply -f .
 ```
@@ -139,13 +154,13 @@ Please note that in this case the latest released container will be downloaded fr
 To test a development build you can create your own image and upload it to your own docker registry:
 
-Most of the elements are optional and just helper function but to use the provided example kubernetes resources you may need the scripts from [here](https://github.com/apache/hadoop/tree/docker-hadoop-runner-jdk11/scripts)
+Most of the elements are optional and just helper functions, but to use the provided example
+kubernetes resources you may need the scripts from
+[here](https://github.com/apache/hadoop/tree/docker-hadoop-runner-jdk11/scripts)
hadoop-hdds/docs/content/beyond/DockerCheatSheet.md (4 additions & 3 deletions)
@@ -22,7 +22,9 @@ weight: 4
 limitations under the License.
 -->
 
-In the `compose` directory of the ozone distribution there are multiple pseudo-cluster setup which can be used to run Ozone in different way (for example with secure cluster, with tracing enabled, with prometheus etc.).
+In the `compose` directory of the ozone distribution there are multiple pseudo-cluster setups which
+can be used to run Ozone in different ways (for example: secure cluster, with tracing enabled,
+with prometheus etc.).
 
 If the usage is not documented in a specific directory, the default usage is the following:
 
@@ -31,8 +33,7 @@ cd compose/ozone
 docker-compose up -d
 ```
 
-The data of the container is ephemeral and deleted together with the docker volumes. To force the deletion of existing data you can always delete all the temporary data:
-
+The data of the container is ephemeral and deleted together with the docker volumes.
hadoop-hdds/docs/content/concept/Hdds.md (1 addition & 1 deletion)
@@ -23,7 +23,7 @@ summary: Storage Container Manager or SCM is the core metadata service of Ozone
 
 Storage container manager provides multiple critical functions for the Ozone
 cluster. SCM acts as the cluster manager, Certificate authority, Block
-manager and the replica manager.
+manager and the Replica manager.
 
 {{<card title="Cluster Management" icon="tasks">}}
 SCM is in charge of creating an Ozone cluster. When an SCM is booted up via the <kbd>init</kbd> command, SCM creates the cluster identity and root certificates needed for the SCM certificate authority. SCM manages the life cycle of a data node in the cluster.
hadoop-hdds/docs/content/interface/JavaApi.md (4 additions & 4 deletions)
@@ -74,21 +74,21 @@ It is possible to pass an array of arguments to the createVolume by creating volume
 
 Once you have a volume, you can create buckets inside the volume.
 
-{{< highlight bash >}}
+{{< highlight java >}}
 // Let us create a bucket called videos.
 assets.createBucket("videos");
 OzoneBucket video = assets.getBucket("videos");
 {{< /highlight >}}
 
-At this point we have a usable volume and a bucket. Our volume is called assets and bucket is called videos.
+At this point we have a usable volume and a bucket. Our volume is called _assets_ and our bucket is called _videos_.
 
 Now we can create a Key.
 
 ### Reading and Writing a Key
 
-With a bucket object the users can now read and write keys. The following code reads a video called intro.mp4 from the local disk and stores in the video bucket that we just created.
+With a bucket object the users can now read and write keys. The following code reads a video called intro.mp4 from the local disk and stores it in the _videos_ bucket that we just created.
 
-{{< highlight bash >}}
+{{< highlight java >}}
 // read data from the file, this is a user provided function.