Eskimo is a Big Data Management Web Console to build, manage and operate Big Data 2.0 clusters leveraging Kubernetes.
Visit http://www.eskimo.sh for more information on Eskimo or look at the documentation in the doc folder.
The Service Development framework is composed of two distinct parts:

- The Docker Images Development Framework, which is used to build the docker images deployed on the eskimo cluster nodes
- The Services Installation Framework, which is used to install these images as services on Kubernetes or natively on the eskimo cluster nodes.

This document presents "2. The Services Installation Framework".
The Services Installation Framework provides tools and standards to install the packaged docker images containing the target software component on Kubernetes or natively on the eskimo cluster nodes.
Eskimo leverages Kubernetes to run and operate most of the services, and the combination of docker and SystemD for the few of them - such as gluster, ntp, etc. - that need to run natively on the Eskimo cluster nodes.
- An eskimo package has to be a docker image.
- An eskimo package has to provide
  - either a systemd unit configuration file enabling eskimo to operate the component natively on the cluster nodes,
  - or a Kubernetes YAML configuration file delegating the deployment and operation to Kubernetes.
- The Eskimo Web User Interface takes care of installing the services defined in the services.json configuration file, copying them over to the nodes where they are intended to run (natively on a node or on Kubernetes). After the proper installation, eskimo relies either on plain old systemctl or on kubectl commands to operate (deploy / start / stop / restart / query / etc.) the services.
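For illustration, once a service is properly installed, operating it boils down to standard commands of the following kind (a minimal sketch; serviceX is a placeholder, and the exact commands eskimo issues internally may differ):

# Native (SystemD) service on a cluster node
sudo systemctl status serviceX
sudo systemctl restart serviceX

# Kubernetes service (from a node where kubectl is configured)
kubectl get pods | grep serviceX
kubectl describe service serviceX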
The principle is pretty straightforward:
- Whenever a service serviceX is to be deployed, eskimo makes an archive of the folder services_setup/serviceX, copies that archive over to the target node and extracts it in a subfolder of /tmp.
- Then eskimo calls the script setup.sh from within that folder. The setup.sh script can do whatever it wants but has to respect an important constraint.
- After that setup.sh script is properly executed, the service should be
  - either installed natively on the node along with a systemd system unit file named serviceX.service, which is used to control the serviceX service lifecycle through commands such as systemctl start serviceX,
  - or properly deployed on Kubernetes, running a POD whose name is prefixed by the service name along with a kube service matching it.
- Aside from the above, nothing is enforced and service developers can implement services the way they want.
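As an illustration of the first (native) option, the tail end of a setup.sh script typically registers a SystemD unit named after the service and enables it. The following is a simplified, hypothetical sketch - the image name, mounts and options are illustrative assumptions, not the content of any eskimo pre-packaged service:

# Register the serviceX SystemD unit (the unit merely starts / stops the docker container)
sudo bash -c 'cat > /etc/systemd/system/serviceX.service' << 'EOF'
[Unit]
Description=serviceX (docker container operated by eskimo)
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f serviceX
ExecStart=/usr/bin/docker run \
    --name serviceX \
    -v /var/lib/serviceX:/var/lib/serviceX \
    -v /var/log/serviceX:/var/log/serviceX \
    eskimo/serviceX:latest
ExecStop=/usr/bin/docker stop serviceX
Restart=always

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable serviceX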
Many Eskimo services can leverage gluster to share data across cluster nodes.
SystemD services rely on the host to mount gluster shares and then bind-mount the share into the service container from the host mount.
The way to do this is as follows:

- The service setup.sh script calls the script /usr/local/sbin/gluster_mount.sh [SHARE_NAME] [SHARE_PATH] [OWNER_USER]. This script will take care of registering the gluster mount with SystemD, fstab, etc.
- The service SystemD unit file should define a dependency on the SystemD mount by using the following statements:
  After=gluster.service
  After=[SHARE_PATH_HYPHEN-SEPARATED].mount
Using the host to mount gluster shares is interesting since it enables Eskimo users to see the content of the gluster share using the Eskimo File Manager.
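For example, a hypothetical serviceX sharing /var/lib/serviceX/data through gluster would contain something along these lines (the share name, path and owner are illustrative assumptions):

# In the service setup script, executed on the host:
/usr/local/sbin/gluster_mount.sh serviceX_data /var/lib/serviceX/data serviceX

# In the serviceX.service SystemD unit file (slashes of the share path become hyphens
# in the corresponding SystemD mount unit name):
#   After=gluster.service
#   After=var-lib-serviceX-data.mount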
The approach is very similar for Kubernetes services, except that they cannot rely on SystemD (which is not available to Kube containers).
So Kubernetes services mount the gluster share directly from inside the docker container.
The way to do this is as follows:

- The container startup script calls the script inContainerMountGluster.sh [SHARE_NAME] [SHARE_PATH] [OWNER_USER]
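For instance, the startup script of a hypothetical serviceX container could begin with something like the following - the names and paths are illustrative, and note that this document refers to the helper both as inContainerMountGluster.sh and /usr/local/sbin/inContainerMountGlusterShare.sh, so check the actual script name deployed in the eskimo base images:

#!/usr/bin/env bash
# Hypothetical container startup script excerpt for a Kubernetes service

# Mount the gluster share from inside the container (no SystemD available here)
/usr/local/sbin/inContainerMountGlusterShare.sh serviceX_data /var/lib/serviceX/data serviceX

# ... then adapt the configuration to the topology and start the actual software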
The OS system users required to execute Kubernetes and native services need to be created on every node of the Eskimo cluster with consistent user IDs across the cluster. For this reason, these linux system users are not created in the individual services' setup.sh scripts. They are created by a specific script /usr/local/sbin/eskimo-system-checks.sh generated at installation time by the eskimo base system installation script install-eskimo-base-system.sh.
There are no requirements when setting up a service on a node aside from the constraints mentioned above. Service developers can set up services on nodes the way they want and no specific requirement is enforced by eskimo.
However, adhering to some conventions greatly eases the implementation and maintenance of these services.
These standard conventions are as follows (illustrated for a service called serviceX):
- Data persistency
  - Cluster node native services should put their persistent data (to be persisted between two docker container restarts) in /var/lib/serviceX, which should be mounted from the host by the docker call in the SystemD unit file.
  - Kubernetes services should either rely on Kubernetes provided persistent storage or use a gluster share.
- Services should put their log files in /var/log/serviceX, which is mounted from the runtime host.
- If the service requires a file to track its PID, that file should be stored under /var/run/serviceX, to be mounted from the runtime host as well.
- Whenever a service serviceX requires a subfolder of /var/lib/serviceX to be shared among cluster nodes, a script setupServiceXGlusterShares.sh should be defined that calls the common helper script (defined at eskimo base system installation on every node) /usr/local/sbin/gluster_mount.sh in the following way, for instance to define the flink data share:
  /usr/local/sbin/gluster_mount.sh flink_data /var/lib/flink/data flink
- The approach is the same from within a container, but the name of the script to call is different: /usr/local/sbin/inContainerMountGlusterShare.sh.
At the end of the day, it's really plain old Unix standards. The only challenge comes from the use of docker and/or Kubernetes, which requires playing with docker mounts a little.
Just look at eskimo pre-packaged services to see examples.
The setup process implemented as a standard in the setup.sh script has three different stages:

- The container instantiation from the pre-packaged image, performed from outside the container
- The software component setup, performed from inside the container
- The registration of the service with SystemD or Kubernetes

A fourth phase happens at runtime:

- The software component configuration applied at runtime, i.e. at the time the container starts, re-applied every time.
This fourth phase is most of the time required to apply configurations that depend on the environment dynamically at startup time and not statically at setup time.
The goal is to address situations where, for instance, master services are moved to another node (native deployment) or moved around by Kubernetes. In such a case, applying the master configuration at service startup time instead of statically makes it possible to simply restart a slave service whenever the master is moved to another node, instead of having to re-configure it entirely.
The install and setup process thus typically looks this way:

- From outside the container:
  - Perform the required configurations on the host OS (create the /var/lib subfolder, the required system user, etc.)
  - Run the docker container that will be used to create the set-up image
  - Call the in-container setup script
- From inside the container:
  - Create the required folders, system user, etc. inside the container
  - Adapt the configuration files to the eskimo context (static configuration only!)
- At service startup time:
  - Adapt the configuration to the topology (see Eskimo Topology and dependency management below)
  - Start the service

And that's it.
Again, the most essential configuration, the adaptation to the cluster topology, is not done statically at container setup time but dynamically at service startup time.
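Put together, a simplified and hypothetical skeleton of this outside / inside split could look as follows (image names, paths and helper script names are illustrative assumptions; eskimo's pre-packaged services rely on shared helper functions and are more complete):

## setup.sh - executed by eskimo on the target node, from outside the container
set -e

# Host OS preparation
sudo mkdir -p /var/lib/serviceX /var/log/serviceX

# Instantiate a container from the pre-packaged image and run the static setup inside it
docker run -d --name serviceX_setup eskimo/serviceX:latest sleep infinity
docker cp inContainerSetupServiceX.sh serviceX_setup:/tmp/
docker exec serviceX_setup bash /tmp/inContainerSetupServiceX.sh

# Freeze the statically configured image and clean up the temporary container
docker commit serviceX_setup eskimo/serviceX:deployed
docker rm -f serviceX_setup

# ... then register the SystemD unit (native service) or apply the Kubernetes
# deployment file (Kubernetes service)

## inContainerStartServiceX.sh - executed at every container start
# Dynamic configuration: adapt to the current topology
# (assuming the topology file is made available to the container)
. /etc/eskimo_topology.sh
# ... patch the configuration files with master addresses, then start the software in foreground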
While nothing is really enforced as a requirement by eskimo (aside from SystemD / Kubernetes and the name of the setup.sh script), there are some standards that should be followed (illustrated for a service named serviceX):

- The "in container" setup script is usually called inContainerSetupServiceX.sh
- The script taking care of the dynamic configuration and of starting the service - the one actually called by systemd upon service startup - is usually called inContainerStartServiceX.sh
- The systemd system configuration file is usually limited to stopping and starting the docker container
- The Kubernetes deployment file usually creates a deployment (for a replicaset) or a statefulset along with all the services required to reach the software component.
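As an illustration of the last point, a minimal, hypothetical Kubernetes deployment file for serviceX could be sketched as follows (resource names must be lowercase in Kubernetes; the labels, ports and image are illustrative assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: servicex
spec:
  replicas: 1
  selector:
    matchLabels:
      app: servicex
  template:
    metadata:
      labels:
        app: servicex
    spec:
      containers:
      - name: servicex
        image: eskimo/servicex:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: servicex
spec:
  selector:
    app: servicex
  ports:
  - port: 8080
    targetPort: 8080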
Creating the service setup folder and writing the setup.sh script is unfortunately not sufficient for eskimo to be able to operate the service.
A few additional steps are required, most importantly defining the new service in the configuration file services.json.
In order for a service to be understood and operable by eskimo, it needs to be declared in the services configuration file services.json.
A service declaration in services.json, for instance for serviceX, would be defined as follows:
services.json
"serviceX" : { "config": { ## [mandatory] giving the column nbr in status table "order": [0-X], ## [optional] whether or not it has to be instaled on every node ## Default value is false.## "mandatory": [true,false], ## [unique] whether the service is a unique service (singpe instance) or multiple "unique": [true,false], ## [unique] whether the service is managed through Kubernetes (true) or natively ## on nodes with SystemD (false) "kubernetes": [true,false], ## [optional] name of the group to associate it in the status table "group" : "{group name}", ## [mandatory] name of the service. must be consistent with service under ## 'service_setup' "name" : "{service name}, ## [mandatory] name of the image. must be consistent with docker image name under ## 'packages_dev' ## Most of the time, this is the same as {service name} "imageName" : "{image name}, ## [mandatory] where to place the service in 'Service Selection Window' "selectionLayout" : { "row" : [1 - X], "col" : [1 - X] }, ## memory to allocate to the service ## (negligible means the service is excluded from the memory allocation policy ## Kubernetes services are accounted specifically: ## - services running on all nodes are account as native services ## - services running as replicaSet are accounted globally and their total ## required memory is divided amongst all nodes. ## ) "memory": "[negligible|small|medium|large|verylarge]", ## [mandatory] The logo to use whenever displaying the service in the UI is ## required ## Use "images/{logo_file_name}" for resources packaged within eskimo web app ## Use "static_images/{logo_file_name}" for resources put in the eskimo ## distribution folder "static_images" ## (static_images is configurable in eskimo.properties with property ## eskimo.externalLogoAndIconFolder) "logo" : "[images|static_images]/{logo_file_name}" ## [mandatory] The icon to use ine the menu for the service ## Use "images/{icon_file_name}" for resources packaged within eskimo web app ## Use "static_images/{icon_file_name}" for resources put in the eskimo ## distribution folder "static_images" ## (static_images is configurable in eskimo.properties with property ## eskimo.externalLogoAndIconFolder) "icon" : "[images|static_images]/{icon_file_name}" # The specific Kubernetes configuration for kubernetes=true services "kubeConfig": { # the resource request to be made by PODs "request": { # The number of CPUs to be allocated to the POD(s) by Kubernetes # Format : X for X cpus, can have decimal values "cpu": "{number of CPU}, # e.g. 0.5 # The amount of RAM to be allocated to the POD(s) by Kubernetes # Format: X[k|m|g|p] where k,m,g,p are multipliers (kilo, mega, etc.) "ram": "{amount of RAM}, # e.g. 1600m } } }, ## [optional] configuration of the serice web console (if anym) "ui": { ## [optional] (A) either URL template should be configured ... "urlTemplate": "http://{NODE_ADDRESS}:{PORT}/", ## [optional] (B) .... 
or proxy configuration in case the service has ## to be proxied by eskimo "proxyTargetPort" : {target port}, ## [mandatory] the time to wait for the web console to initialize before ## making it available "waitTime": {1000 - X}, ## [mandatory] the name of the menu entry "title" : "{menu name}", ## [mandatory] the role that the logged in user needs to have to be able ## to see and use the service (UI) ## Possible values are : ## - "*" for any role (open access) ## - "ADMIN" to limit usage to administrators ## - "USER" to limit usage to users (makes little sense) "role" : "[*|ADMIN|USER]", ## [optional] the title to use for the link to the service on the status page "statusPageLinktitle" : "{Link Title}", ## [optional] Whether standard rewrite rules need to be applied to this ## service ## (Standard rewrite rules are documented hereunder) ## (default is true) "applyStandardProxyReplacements": [true|false], ## [optional] List of custom rewrite rules for proxying of web consoles "proxyReplacements" : [ ## first rewrite rule. As many as required can be declared { ## [mandatory] Type of rwrite rule. At the moment only PLAIN is supported ## for full text search and replace. ## In the future REGEXP type shall be implemented "type" : "[PLAIN]", ## [optional] a text searched in the URL. this replacement is applied only ## if the text is found in the URL "urlPattern" : "{url_pattern}", ## e.g. controllers.js ## [mandatory] source text to be replaced "source" : "{source_URL}", ## e.g. "/API" ## [mandatory] replacement text "target" : "{proxied_URL}" ## e.g. "/eskimo/kibana/API" } ], ## [optional] List of page scripter ## Page scripts are added to the target resource just aboce the closing 'body' ## tag "pageScripters" : [ { # [mandatory] the target resource where the script should be added "resourceUrl" : "{relative path to target resource}", # [mandatpry] content of the 'script' tag to be added "script": "{javascript script}" } ], ## [optional] list of URL in headers (e.g. for redirects) that should be ## rewritten "urlRewriting" : [ { # [mandatory] the start pattern of the URL to be searched in returned headers "startUrl" : "{searched prefix}" ## e.g. "{APP_ROOT_URL}/history/", # [mandatory] the replacement for that pattern "replacement" : "{replacement}" ## e.g. ## "{APP_ROOT_URL}/spark-history-server/history/" } ] }, ## [optional] array of dependencies that need to be available and configured "dependencies": [ ## first dependency. As many as required can be declared { ## [mandatory] For services not operated by kubernetes, this is ## essential: it defines how the master service is determined. "masterElectionStrategy": "[NONE|FIRST_NODE|SAME_NODE_OR_RANDOM|RANDOM|RANDOM_NODE_AFTER|SAME_NODE|ALl_NODES]" ## the service relating to this dependency "masterService": "{master service name}", ## The number of master expected "numberOfMasters": [1-x], ## whether that dependency is mandatory or not "mandatory": [true|false], ## whether or not the dependent service (parent JSON definition) should be ## restarted in case an operation affects this service "restart": [true|false], } ] ## [optional] array of configuration properties that should be editable using the ## Eskimo UI. These configuration properties are injected "editableConfigurations": [ ## first editable configuration. As many as required can be declared { ## the name of the configuration file to search for in the software ## installation directory (and sub-folders) "filename": "{configuration file name}", ## e.g. 
"server.properties" ## the name of the service installation folder under /usr/local/lib ## (eskimo standard installation path) "filesystemService": "{folder name}", ## e.g. "kafka" ## the type of the property syntax ## - "variable" for a simple approach where a variable declaration of the ## expected format is searched for ## - "regex" for a more advanced approach where the configuration is searched ## and replaces using the regex given in format "propertyType": "variable", ## The format of the property definition in the configuration file ## Supported formats are: ## - "{name}: {value}" or ## - "{name}={value}" or ## - "{name} = s{value} or" ## - "REXG with {name} and {value} as placeholders" "propertyFormat": "property format", ## e.g. "{name}={value}" ## The prefix to use in the configuration file for comments "commentPrefix": "#", ## The list of properties to be editable by administrators using the eskimo UI "properties": [ ## first property. As many as required can be declared { ## name of the property "name": "{property name}", ## e.g. "num.network.threads" ## the description to show in the UI "comment": "{property description}", ## the default value to use if undefined by administrators "defaultValue": "{default property value}" ## e.g. "3" } ] } ], ## [optional] array of custom commands that are made available from the context ## menu on the System Status Page (when clicking on services status (OK/KO/etc.) "commands" : [ { ## ID of the command. Needs to be a string with only [a-zA-Z_] "id" : "{command_id}", ## e.g. "show_log" ## Name of the command. This name is displayed in the menu "name" : "{command_name}", ## e.g. "Show Logs" ## The System command to be called on the node running the service "command": "{system_command}", ## e.g. "cat /var/log/ntp/ntp.log" ## The font-awesome icon to be displayed in the menu "icon": "{fa-icon}" ## e.g. "fa-file" } ], ## Additional environment information to be generated in eskimo_topology.sh ## This can contain multiple values, all possibilities are listed underneath as ## example "additionalEnvironment": { # Create an env var that lists all nodes where serviceX is installed "ALL_NODES_LIST_serviceX", # Create a env var that gives the number for this service, in a consistent and # persistent way (can be 0 or 1 based "SERVICE_NUMBER_[0|1]_BASED", # Give in evnv var the context path under which the eskimo Wen Use Interface is # deployed "CONTEXT_PATH" } }
(Bear in mind that since JSON doesn't support such a thing as comments, the example above is not a valid JSON snippet - the comments starting with '##' would need to be removed.)
Everything is pretty straightforward and one should really look at the services pre-packaged within eskimo to get inspiration when designing a new service to be operated by eskimo.
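For orientation, a minimal (and this time valid, comment-free) declaration for a hypothetical SystemD-operated service with a proxied web console could look like the following - all values are illustrative:

"serviceX" : {
    "config": {
        "order": 10,
        "unique": true,
        "kubernetes": false,
        "name": "serviceX",
        "imageName": "serviceX",
        "selectionLayout": { "row": 1, "col": 1 },
        "memory": "small",
        "logo": "images/serviceX-logo.png",
        "icon": "images/serviceX-icon.png"
    },
    "ui": {
        "proxyTargetPort": 8080,
        "waitTime": 5000,
        "title": "ServiceX UI",
        "role": "*"
    },
    "dependencies": [
        {
            "masterElectionStrategy": "RANDOM",
            "masterService": "zookeeper",
            "numberOfMasters": 1,
            "mandatory": true,
            "restart": false
        }
    ]
}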
As stated above, the most essential configuration property in a service definition is the masterElectionStrategy of a dependency.
The whole master / slave topology management logic as well as the whole dependencies framework of eskimo relies on it.
It is especially important for non-kubernetes services since, for services deployed on Kubernetes, the notion of "master" (in the eskimo sense) is most of the time replaced by the use of a kubernetes service to reach the software component.
Let's start by introducing the supported values for this masterElectionStrategy property:
- NONE: This is the simplest case. It enables a service to declare that it requires another service without bothering about where that service should be installed. It just has to be present somewhere on the cluster and the declaring service doesn't care where.
  It does however enforce the presence of that dependency service somewhere and refuses to validate the installation if the dependency is not available anywhere on the eskimo cluster nodes.
- FIRST_NODE: This is used to define a simple dependency on another service. In addition, FIRST_NODE indicates that the declaring service wants to know about at least one node where the dependency service is available.
  That other node should be the first node found where that dependency service is available. First node means that the nodes are processed in their order of declaration: the first node that runs the dependency service is given as dependency to the declaring service.
- SAME_NODE_OR_RANDOM: This is used to define a simple dependency on another service. In detail, SAME_NODE_OR_RANDOM indicates that the declaring service wants to know about at least one node where the dependency service is available.
  With SAME_NODE_OR_RANDOM, eskimo tries to find the dependency service on the very same node as the one running the declaring service, if the dependency service is available on that very same node.
  If no instance of the dependency service is running on that very same node, then any other random node running the dependency service is used as dependency. (This is only possible for native node SystemD services.)
- RANDOM: This is used to define a simple dependency on another service. In detail, RANDOM indicates that the declaring service wants to know about at least one node where the dependency service is available. That other node can be any node of the cluster where the dependency service is installed.
- RANDOM_NODE_AFTER: This is used to define a simple dependency on another service. In detail, RANDOM_NODE_AFTER indicates that the declaring service wants to know about at least one node where the dependency service is available.
  That other node should be any node of the cluster where the dependency service is installed, yet with a node number (internal eskimo node declaration order) greater than the node where the declaring service is installed.
  This is useful to define a chain of dependencies where every node instance depends on another node instance in a circular way - pretty nifty for instance for elasticsearch discovery configuration. (This is only possible for native node SystemD services.)
- SAME_NODE: This means that the dependency service is expected to be available on the same node as the declaring service, otherwise eskimo will report an error during service installation. (This is only possible for native node SystemD services.)
- ALL_NODES: This means that every service declaring this dependency will receive the full list of nodes running the master service in a topology variable.
The best way to understand this is to look at the examples in eskimo pre-packaged services declared in the bundled services.json.
For instance:

- Etcd wants to use the co-located instance of gluster. Since gluster is expected to be available from all nodes of the eskimo cluster, this dependency is simply expressed as:

  "dependencies": [
    {
      "masterElectionStrategy": "SAME_NODE",
      "masterService": "gluster",
      "numberOfMasters": 1,
      "mandatory": false,
      "restart": true
    }
  ]

- kube-slave services need to reach the first node where kube-master is available (there is only one in Eskimo Community Edition anyway), so the dependency is defined as follows:

  "dependencies": [
    {
      "masterElectionStrategy": "FIRST_NODE",
      "masterService": "kube-master",
      "numberOfMasters": 1,
      "mandatory": true,
      "restart": true
    },

- kafka-manager needs to reach zookeeper on the first node as well as any random instance of kafka running on the cluster, so the dependencies are expressed as simply as:

  "dependencies": [
    {
      "masterElectionStrategy": "FIRST_NODE",
      "masterService": "zookeeper",
      "numberOfMasters": 1,
      "mandatory": true,
      "restart": true
    },
    {
      "masterElectionStrategy": "RANDOM",
      "masterService": "kafka",
      "numberOfMasters": 1,
      "mandatory": true,
      "restart": false
    }
Look at other examples to get inspired.
Another pretty important property in a service configuration in services.json is the memory consumption property: memory.
Note that the memory actually requested for the PODs of Kubernetes services is defined in another way, through the kubeConfig.request settings presented above; the memory property discussed here drives the amount of memory eskimo injects into the services themselves (Kubernetes services are accounted for in this allocation policy as described below).
The possible values for that property are as follows:
- negligible: the service is not accounted in memory allocation
- small: the service gets a single share of memory
- medium: the service gets two shares of memory
- large: the service gets three shares of memory
The system then works by computing the sum of shares for all services on a node and dividing the memory available on that node amongst those shares, allocating the corresponding portion of memory to every service.
Of course, the system first removes from the available memory a significant portion to ensure some room for the kernel and the filesystem cache.
Also, Kubernetes services deployed as a statefulSet on every node are accounted on every node, while unique Kubernetes services are accounted only partially, with a share corresponding to the amount of memory they would take divided by the number of nodes.
Since unique Kubernetes services are spread among nodes, this works well in practice and is realistic.
Let's imagine the following services installed on a cluster node, along with their memory setting:

Native services:

- ntp - negligible
- prometheus - negligible
- gluster - negligible
- zookeeper - small

Kubernetes services:

- elasticsearch - large
- logstash - small
- kafka - large
- kibana - medium
- zeppelin - very large
The following table shows the resulting memory allocation for three different total RAM sizes on the cluster node running these services.
Each row is a service and each column gives the memory allocated to that service for a given total node RAM.
Service | Nbr. of shares | 8 GB node | 16 GB node | 20 GB node
---|---|---|---|---
ntp | 0 | - | - | -
prometheus | 0 | - | - | -
gluster | 0 | - | - | -
zookeeper | 1 | 525m | 1125m | 1425m
elasticsearch | 3 | 1575m | 3375m | 4275m
logstash | 1 | 525m | 1125m | 1425m
kafka | 3 | 1575m | 3375m | 4275m
kibana | 2/3* | 350m | 750m | 950m
zeppelin | 5/3* | 875m | 1875m | 2375m
Filesystem cache reserve | 3 | 1575m | 3375m | 4275m
OS reserve | - | 1000m | 1000m | 1000m
(* for 3 nodes)

Kibana and Zeppelin are unique services running on Kubernetes; the example above assumes 3 nodes in the cluster, hence their memory share is divided by 3 on each node.
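To see where these figures come from, the 8 GB column can be worked out as follows (using the share counts from the table above):

total shares = 1 (zookeeper) + 3 (elasticsearch) + 1 (logstash) + 3 (kafka)
             + 2/3 (kibana) + 5/3 (zeppelin) + 3 (filesystem cache reserve)
             = 13 1/3 shares
available    = 8000m total - 1000m OS reserve = 7000m
one share    = 7000m / (13 1/3) = 525m

Hence zookeeper (1 share) gets 525m, elasticsearch and kafka (3 shares) get 1575m, kibana (2/3 of a share) gets 350m, and so on.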
The memory configured above is injected directly into the services themselves, without any consideration for the memory requested by the corresponding Kubernetes POD. One should take that into account and declare a comparable amount of memory when declaring the requested POD memory for Kubernetes services. In fact, one should declare a little more than this as the Kubernetes requested POD memory, to account for overhead.
Every time the cluster nodes / services configuration is changed, eskimo verifies the global services topology and generates a "topology definition file" for every node of the cluster.
That topology definition file defines all the dependencies and where to find them (using the notion of MASTER) for every service running on every node.
The "topology definition file" can be found on nodes at /etc/eskimo_topology.sh.
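Purely for illustration, an excerpt of such a file could look like the following - the variable names shown here are hypothetical; the actual names are generated by eskimo and may differ:

#!/bin/bash
# hypothetical excerpt of /etc/eskimo_topology.sh - actual variable names may differ
export MASTER_ZOOKEEPER_1=192.168.10.11
export ALL_NODES_LIST_kafka=192.168.10.11,192.168.10.12,192.168.10.13
export CONTEXT_PATH=eskimo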
Many services managed by eskimo have web consoles used to administer them, such as mesos-agent, mesos-master, kafka-manager, etc. Some are even pure web consoles used to administer other services or perform Data Science tasks, such as Kibana, Zeppelin, EGMI, etc.
Eskimo supports two modes for providing these web consoles in its own UI, as presented in the configuration above:

- (A) Configuration of an urlTemplate, which is used by eskimo to show an iframe displaying the web console directly from the node on which it is installed. This method is supported for backwards-compatibility purposes but is not recommended.
- (B) Configuration of a proxyTargetPort for full proxying and tunnelling (using SSH) of the whole HTTP flow to the web console, using eskimo's embedded proxying and tunnelling feature. This is the recommended way and the one used by all eskimo pre-packaged services and web consoles.
Proxying works as explained in the User Guide in the section "SSH Tunnelling".
Proxying is however a little more complicated to set up, since eskimo needs to rewrite many of the text resources (javascript, html and json) served by the proxied web console so that the URLs they contain pass through the proxy.
Eskimo provides a powerful rewrite engine that one can use to implement the rewrite rules defined in the configuration as presented above.
Proxying the HTTP flow of web consoles means that a lot of the text resources served by the individual target web consoles need to be processed in such a way that absolute URLs are rewritten. This is unfortunately tricky and many different situations can occur, from URLs built dynamically in javascript to static resource URLs in CSS files, for instance.
An eskimo service developer needs to analyze the application, debug it and understand every pattern that needs to be replaced and define a rewrite rule for each of them.
A set of standard rewrite rules is implemented once and for all by the eskimo HTTP proxy for all services. By default these standard rewrite rules are enabled for a service, unless the service configuration declares "applyStandardProxyReplacements": false, in which case they are not applied to that specific service.
This is useful when a standard rule is actually harming a specific web console behaviour.
The standard rewrite rules are as follows:
{ "type" : "PLAIN", "source" : "src=\"/", "target" : "src=\"/{PREFIX_PATH}/" },
{ "type" : "PLAIN", "source" : "action=\"/", "target" : "action=\"/{PREFIX_PATH}/" },
{ "type" : "PLAIN", "source" : "href=\"/", "target" : "href=\"/{PREFIX_PATH}/" },
{ "type" : "PLAIN", "source" : "href='/", "target" : "href='/{PREFIX_PATH}/" },
{ "type" : "PLAIN", "source" : "url(\"/", "target" : "url(\"/{PREFIX_PATH}/" },
{ "type" : "PLAIN", "source" : "url('/", "target" : "url('/{PREFIX_PATH}/" },
{ "type" : "PLAIN", "source" : "url(/", "target" : "url(/{PREFIX_PATH}/" },
{ "type" : "PLAIN", "source" : "/api/v1", "target" : "/{PREFIX_PATH}/api/v1" },
{ "type" : "PLAIN", "source" : "\"/static/", "target" : "\"/{PREFIX_PATH}/static/" },
In addition to the standard rewrite rules - that can be used or not by a service web console - an eskimo service
developer can define as many custom rewrite rules as he wants in the service configuration in services.json
as
presented above.
Some patterns can be used in both the source and target strings; they are replaced by the framework before the source is searched for, respectively the target injected, in the text stream:

- CONTEXT_PATH will be resolved to the context root at which the eskimo web application is deployed, such as for instance eskimo
- PREFIX_PATH will be resolved to the specific context path of the service web console, such as for instance for kibana {CONTEXT_PATH}/kibana, e.g. eskimo/kibana, or kibana if no context root is used.
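For instance, a custom rule rewriting a hard-coded API path in a specific javascript resource could look like the following - the urlPattern and paths are illustrative and mirror the kibana example values shown in the reference above:

"proxyReplacements" : [
    {
        "type" : "PLAIN",
        "urlPattern" : "controllers.js",
        "source" : "\"/API\"",
        "target" : "\"/{PREFIX_PATH}/API\""
    }
]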
Eskimo is Copyright 2019 - 2022 eskimo.sh - All rights reserved.
Author : http://www.eskimo.sh
Eskimo is available under a dual licensing model : commercial and GNU AGPL.
If you did not acquire a commercial licence for Eskimo, you can still use it and consider it free software under the
terms of the GNU Affero Public License. You can redistribute it and/or modify it under the terms of the GNU Affero
Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option)
any later version.
Compliance with each and every aspect of the GNU Affero Public License is mandatory for users who did not acquire a commercial license.
Eskimo is distributed as a free software under GNU AGPL in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero Public License for more details.
You should have received a copy of the GNU Affero Public License along with Eskimo. If not, see https://www.gnu.org/licenses/ or write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA, 02110-1301 USA.
You can be released from the requirements of the license by purchasing a commercial license. Buying such a commercial license is mandatory as soon as:
-
you develop activities involving Eskimo without disclosing the source code of your own product, software, platform, use cases or scripts.
-
you deploy eskimo as part of a commercial product, platform or software.
For more information, please contact eskimo.sh at https://www.eskimo.sh
The above copyright notice and this licensing notice shall be included in all copies or substantial portions of the Software.