Update ELK images for 8.16.0 #17887
Closed
navyau09 wants to merge 1 commit into docker-library:master from elastic:update-official-images-8.16.0
Conversation
Diff for 21c0657:

diff --git a/_bashbrew-cat b/_bashbrew-cat
index 4ee3252..04b0311 100644
--- a/_bashbrew-cat
+++ b/_bashbrew-cat
@@ -7,7 +7,7 @@ Builder: buildkit
Tags: 7.17.25
Architectures: amd64, arm64v8
GitFetch: refs/heads/7.17
-GitCommit: 9a2cb64607abe3574f8f1d0fa8b9680798f0e6d9
+GitCommit: fee99c955058383955d074689a499ffa0ebcf245
Tags: 8.15.3
Architectures: amd64, arm64v8
@@ -24,7 +24,7 @@ Builder: buildkit
Tags: 7.17.25
Architectures: amd64, arm64v8
GitFetch: refs/heads/7.17
-GitCommit: 9a2cb64607abe3574f8f1d0fa8b9680798f0e6d9
+GitCommit: fee99c955058383955d074689a499ffa0ebcf245
Tags: 8.15.3
Architectures: amd64, arm64v8
@@ -41,7 +41,7 @@ Builder: buildkit
Tags: 7.17.25
Architectures: amd64, arm64v8
GitFetch: refs/heads/7.17
-GitCommit: 9a2cb64607abe3574f8f1d0fa8b9680798f0e6d9
+GitCommit: fee99c955058383955d074689a499ffa0ebcf245
Tags: 8.15.3
Architectures: amd64, arm64v8
diff --git a/elasticsearch_7.17.25/Dockerfile b/elasticsearch_7.17.25/Dockerfile
index 4f493fe..f81ebaf 100644
--- a/elasticsearch_7.17.25/Dockerfile
+++ b/elasticsearch_7.17.25/Dockerfile
@@ -5,9 +5,10 @@
################################################################################
################################################################################
-# Build stage 0 `builder`:
+# Build stage 1 `builder`:
# Extract Elasticsearch artifact
################################################################################
+
FROM ubuntu:20.04 AS builder
# Install required packages to extract the Elasticsearch distribution
@@ -19,13 +20,14 @@ RUN for iter in 1 2 3 4 5 6 7 8 9 10; do \
done; \
exit $exit_code
-# `tini` is a tiny but valid init for containers. This is used to cleanly
-# control how ES and any child processes are shut down.
-#
-# The tini GitHub page gives instructions for verifying the binary using
-# gpg, but the keyservers are slow to return the key and this can fail the
-# build. Instead, we check the binary against the published checksum.
-RUN set -eux ; \
+ # `tini` is a tiny but valid init for containers. This is used to cleanly
+ # control how ES and any child processes are shut down.
+ # For wolfi we pick it from the blessed wolfi package registry.
+ #
+ # The tini GitHub page gives instructions for verifying the binary using
+ # gpg, but the keyservers are slow to return the key and this can fail the
+ # build. Instead, we check the binary against the published checksum.
+ RUN set -eux ; \
tini_bin="" ; \
case "$(arch)" in \
aarch64) tini_bin='tini-arm64' ;; \
@@ -42,7 +44,7 @@ RUN set -eux ; \
RUN mkdir /usr/share/elasticsearch
WORKDIR /usr/share/elasticsearch
-RUN curl --retry 10 -S -L --output /tmp/elasticsearch.tar.gz https://artifacts-no-kpi.elastic.co/downloads/elasticsearch/elasticsearch-7.17.25-linux-$(arch).tar.gz
+RUN curl --retry 10 -S -L --output /tmp/elasticsearch.tar.gz https://artifacts-no-kpi.elastic.co/downloads/elasticsearch/elasticsearch-8.16.0-linux-$(arch).tar.gz
RUN tar -zxf /tmp/elasticsearch.tar.gz --strip-components=1
@@ -71,9 +73,9 @@ RUN sed -i -e 's/ES_DISTRIBUTION_TYPE=tar/ES_DISTRIBUTION_TYPE=docker/' bin/elas
find config -type f -exec chmod 0664 {} +
################################################################################
-# Build stage 1 (the actual Elasticsearch image):
+# Build stage 2 (the actual Elasticsearch image):
#
-# Copy elasticsearch from stage 0
+# Copy elasticsearch from stage 1
# Add entrypoint
################################################################################
@@ -107,7 +110,7 @@ COPY --from=builder --chown=0:0 /usr/share/elasticsearch /usr/share/elasticsearc
COPY --from=builder --chown=0:0 /bin/tini /bin/tini
ENV PATH /usr/share/elasticsearch/bin:$PATH
-
+ENV SHELL /bin/bash
COPY bin/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
# 1. Sync the user and group permissions of /etc/passwd
@@ -134,25 +137,25 @@ RUN /etc/ca-certificates/update.d/docker-openjdk
EXPOSE 9200 9300
-LABEL org.label-schema.build-date="2024-10-16T22:06:36.904732810Z" \
+LABEL org.label-schema.build-date="2024-11-08T10:05:56.292914697Z" \
org.label-schema.license="Elastic-License-2.0" \
org.label-schema.name="Elasticsearch" \
org.label-schema.schema-version="1.0" \
org.label-schema.url="https://www.elastic.co/products/elasticsearch" \
org.label-schema.usage="https://www.elastic.co/guide/en/elasticsearch/reference/index.html" \
- org.label-schema.vcs-ref="f9b6b57d1d0f76e2d14291c04fb50abeb642cfbf" \
+ org.label-schema.vcs-ref="12ff76a92922609df4aba61a368e7adf65589749" \
org.label-schema.vcs-url="https://github.com/elastic/elasticsearch" \
org.label-schema.vendor="Elastic" \
- org.label-schema.version="7.17.25" \
- org.opencontainers.image.created="2024-10-16T22:06:36.904732810Z" \
+ org.label-schema.version="8.16.0" \
+ org.opencontainers.image.created="2024-11-08T10:05:56.292914697Z" \
org.opencontainers.image.documentation="https://www.elastic.co/guide/en/elasticsearch/reference/index.html" \
org.opencontainers.image.licenses="Elastic-License-2.0" \
- org.opencontainers.image.revision="f9b6b57d1d0f76e2d14291c04fb50abeb642cfbf" \
+ org.opencontainers.image.revision="12ff76a92922609df4aba61a368e7adf65589749" \
org.opencontainers.image.source="https://github.com/elastic/elasticsearch" \
org.opencontainers.image.title="Elasticsearch" \
org.opencontainers.image.url="https://www.elastic.co/products/elasticsearch" \
org.opencontainers.image.vendor="Elastic" \
- org.opencontainers.image.version="7.17.25"
+ org.opencontainers.image.version="8.16.0"
# Our actual entrypoint is `tini`, a minimal but functional init program. It
# calls the entrypoint we provide, while correctly forwarding signals.
@@ -160,6 +163,8 @@ ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/docker-entrypoint.sh"]
# Dummy overridable parameter parsed by entrypoint
CMD ["eswrapper"]
+USER 1000:0
+
################################################################################
# End of multi-stage Dockerfile
################################################################################
diff --git a/elasticsearch_7.17.25/bin/docker-entrypoint.sh b/elasticsearch_7.17.25/bin/docker-entrypoint.sh
index eeb9832..d7b41b8 100755
--- a/elasticsearch_7.17.25/bin/docker-entrypoint.sh
+++ b/elasticsearch_7.17.25/bin/docker-entrypoint.sh
@@ -4,38 +4,22 @@ set -e
# Files created by Elasticsearch should always be group writable too
umask 0002
-run_as_other_user_if_needed() {
- if [[ "$(id -u)" == "0" ]]; then
- # If running as root, drop to specified UID and run command
- exec chroot --userspec=1000:0 / "${@}"
- else
- # Either we are running in Openshift with random uid and are a member of the root group
- # or with a custom --user
- exec "${@}"
- fi
-}
-
# Allow user specify custom CMD, maybe bin/elasticsearch itself
# for example to directly specify `-E` style parameters for elasticsearch on k8s
# or simply to run /bin/bash to check the image
-if [[ "$1" != "eswrapper" ]]; then
- if [[ "$(id -u)" == "0" && $(basename "$1") == "elasticsearch" ]]; then
- # centos:7 chroot doesn't have the `--skip-chdir` option and
- # changes our CWD.
- # Rewrite CMD args to replace $1 with `elasticsearch` explicitly,
+if [[ "$1" == "eswrapper" || $(basename "$1") == "elasticsearch" ]]; then
+ # Rewrite CMD args to remove the explicit command,
# so that we are backwards compatible with the docs
- # from the previous Elasticsearch versions<6
- # and configuration option D:
+ # from the previous Elasticsearch versions < 6
+ # and configuration option:
# https://www.elastic.co/guide/en/elasticsearch/reference/5.6/docker.html#_d_override_the_image_8217_s_default_ulink_url_https_docs_docker_com_engine_reference_run_cmd_default_command_or_options_cmd_ulink
# Without this, user could specify `elasticsearch -E x.y=z` but
- # `bin/elasticsearch -E x.y=z` would not work.
- set -- "elasticsearch" "${@:2}"
- # Use chroot to switch to UID 1000 / GID 0
- exec chroot --userspec=1000:0 / "$@"
- else
- # User probably wants to run something else, like /bin/bash, with another uid forced (Openshift?)
+ # `bin/elasticsearch -E x.y=z` would not work. In any case,
+ # we want to continue through this script, and not exec early.
+ set -- "${@:2}"
+else
+ # Run whatever command the user wanted
exec "$@"
- fi
fi
# Allow environment variables to be set by creating a file with the
@@ -56,30 +40,23 @@ if [[ -f bin/elasticsearch-users ]]; then
# enabled, but we have no way of knowing which node we are yet. We'll just
# honor the variable if it's present.
if [[ -n "$ELASTIC_PASSWORD" ]]; then
- [[ -f /usr/share/elasticsearch/config/elasticsearch.keystore ]] || (run_as_other_user_if_needed elasticsearch-keystore create)
- if ! (run_as_other_user_if_needed elasticsearch-keystore has-passwd --silent) ; then
+ [[ -f /usr/share/elasticsearch/config/elasticsearch.keystore ]] || (elasticsearch-keystore create)
+ if ! (elasticsearch-keystore has-passwd --silent) ; then
# keystore is unencrypted
- if ! (run_as_other_user_if_needed elasticsearch-keystore list | grep -q '^bootstrap.password$'); then
- (run_as_other_user_if_needed echo "$ELASTIC_PASSWORD" | elasticsearch-keystore add -x 'bootstrap.password')
+ if ! (elasticsearch-keystore list | grep -q '^bootstrap.password$'); then
+ (echo "$ELASTIC_PASSWORD" | elasticsearch-keystore add -x 'bootstrap.password')
fi
else
# keystore requires password
- if ! (run_as_other_user_if_needed echo "$KEYSTORE_PASSWORD" \
+ if ! (echo "$KEYSTORE_PASSWORD" \
| elasticsearch-keystore list | grep -q '^bootstrap.password$') ; then
COMMANDS="$(printf "%s\n%s" "$KEYSTORE_PASSWORD" "$ELASTIC_PASSWORD")"
- (run_as_other_user_if_needed echo "$COMMANDS" | elasticsearch-keystore add -x 'bootstrap.password')
+ (echo "$COMMANDS" | elasticsearch-keystore add -x 'bootstrap.password')
fi
fi
fi
fi
-if [[ "$(id -u)" == "0" ]]; then
- # If requested and running as root, mutate the ownership of bind-mounts
- if [[ -n "$TAKE_FILE_OWNERSHIP" ]]; then
- chown -R 1000:0 /usr/share/elasticsearch/{data,logs}
- fi
-fi
-
if [[ -n "$ES_LOG_STYLE" ]]; then
case "$ES_LOG_STYLE" in
console)
@@ -96,6 +73,12 @@ if [[ -n "$ES_LOG_STYLE" ]]; then
esac
fi
+if [[ -n "$ENROLLMENT_TOKEN" ]]; then
+ POSITIONAL_PARAMETERS="--enrollment-token $ENROLLMENT_TOKEN"
+else
+ POSITIONAL_PARAMETERS=""
+fi
+
# Signal forwarding and child reaping is handled by `tini`, which is the
# actual entrypoint of the container
-run_as_other_user_if_needed /usr/share/elasticsearch/bin/elasticsearch <<<"$KEYSTORE_PASSWORD"
+exec /usr/share/elasticsearch/bin/elasticsearch "$@" $POSITIONAL_PARAMETERS <<<"$KEYSTORE_PASSWORD"
diff --git a/elasticsearch_7.17.25/config/log4j2.properties b/elasticsearch_7.17.25/config/log4j2.properties
index b46562d..c0d67c8 100644
--- a/elasticsearch_7.17.25/config/log4j2.properties
+++ b/elasticsearch_7.17.25/config/log4j2.properties
@@ -3,8 +3,8 @@ status = error
######## Server JSON ############################
appender.rolling.type = Console
appender.rolling.name = rolling
-appender.rolling.layout.type = ESJsonLayout
-appender.rolling.layout.type_name = server
+appender.rolling.layout.type = ECSJsonLayout
+appender.rolling.layout.dataset = elasticsearch.server
################################################
@@ -16,16 +16,15 @@ rootLogger.appenderRef.rolling.ref = rolling
######## Deprecation JSON #######################
appender.deprecation_rolling.type = Console
appender.deprecation_rolling.name = deprecation_rolling
-appender.deprecation_rolling.layout.type = ESJsonLayout
-appender.deprecation_rolling.layout.type_name = deprecation.elasticsearch
-appender.deprecation_rolling.layout.esmessagefields=x-opaque-id,key,category,elasticsearch.elastic_product_origin
+appender.deprecation_rolling.layout.type = ECSJsonLayout
+# Intentionally follows a different pattern to above
+appender.deprecation_rolling.layout.dataset = deprecation.elasticsearch
appender.deprecation_rolling.filter.rate_limit.type = RateLimitingFilter
appender.header_warning.type = HeaderWarningAppender
appender.header_warning.name = header_warning
#################################################
-#################################################
logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = WARN
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
@@ -35,9 +34,8 @@ logger.deprecation.additivity = false
######## Search slowlog JSON ####################
appender.index_search_slowlog_rolling.type = Console
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
-appender.index_search_slowlog_rolling.layout.type = ESJsonLayout
-appender.index_search_slowlog_rolling.layout.type_name = index_search_slowlog
-appender.index_search_slowlog_rolling.layout.esmessagefields=message,took,took_millis,total_hits,types,stats,search_type,total_shards,source,id
+appender.index_search_slowlog_rolling.layout.type = ECSJsonLayout
+appender.index_search_slowlog_rolling.layout.dataset = elasticsearch.index_search_slowlog
#################################################
@@ -50,11 +48,8 @@ logger.index_search_slowlog_rolling.additivity = false
######## Indexing slowlog JSON ##################
appender.index_indexing_slowlog_rolling.type = Console
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
-appender.index_indexing_slowlog_rolling.layout.type = ESJsonLayout
-appender.index_indexing_slowlog_rolling.layout.type_name = index_indexing_slowlog
-appender.index_indexing_slowlog_rolling.layout.esmessagefields=message,took,took_millis,doc_type,id,routing,source
-
-#################################################
+appender.index_indexing_slowlog_rolling.layout.type = ECSJsonLayout
+appender.index_indexing_slowlog_rolling.layout.dataset = elasticsearch.index_indexing_slowlog
#################################################
@@ -63,12 +58,41 @@ logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = false
+logger.org_apache_pdfbox.name = org.apache.pdfbox
+logger.org_apache_pdfbox.level = off
+
+logger.org_apache_poi.name = org.apache.poi
+logger.org_apache_poi.level = off
+
+logger.org_apache_fontbox.name = org.apache.fontbox
+logger.org_apache_fontbox.level = off
+
+logger.org_apache_xmlbeans.name = org.apache.xmlbeans
+logger.org_apache_xmlbeans.level = off
+
+logger.com_amazonaws.name = com.amazonaws
+logger.com_amazonaws.level = warn
+
+logger.com_amazonaws_jmx_SdkMBeanRegistrySupport.name = com.amazonaws.jmx.SdkMBeanRegistrySupport
+logger.com_amazonaws_jmx_SdkMBeanRegistrySupport.level = error
+
+logger.com_amazonaws_metrics_AwsSdkMetrics.name = com.amazonaws.metrics.AwsSdkMetrics
+logger.com_amazonaws_metrics_AwsSdkMetrics.level = error
+
+logger.com_amazonaws_auth_profile_internal_BasicProfileConfigFileLoader.name = com.amazonaws.auth.profile.internal.BasicProfileConfigFileLoader
+logger.com_amazonaws_auth_profile_internal_BasicProfileConfigFileLoader.level = error
+
+logger.com_amazonaws_services_s3_internal_UseArnRegionResolver.name = com.amazonaws.services.s3.internal.UseArnRegionResolver
+logger.com_amazonaws_services_s3_internal_UseArnRegionResolver.level = error
+
appender.audit_rolling.type = Console
appender.audit_rolling.name = audit_rolling
appender.audit_rolling.layout.type = PatternLayout
appender.audit_rolling.layout.pattern = {\
"type":"audit", \
"timestamp":"%d{yyyy-MM-dd'T'HH:mm:ss,SSSZ}"\
+ %varsNotEmpty{, "cluster.name":"%enc{%map{cluster.name}}{JSON}"}\
+ %varsNotEmpty{, "cluster.uuid":"%enc{%map{cluster.uuid}}{JSON}"}\
%varsNotEmpty{, "node.name":"%enc{%map{node.name}}{JSON}"}\
%varsNotEmpty{, "node.id":"%enc{%map{node.id}}{JSON}"}\
%varsNotEmpty{, "host.name":"%enc{%map{host.name}}{JSON}"}\
@@ -80,16 +104,21 @@ appender.audit_rolling.layout.pattern = {\
%varsNotEmpty{, "user.run_by.name":"%enc{%map{user.run_by.name}}{JSON}"}\
%varsNotEmpty{, "user.run_as.name":"%enc{%map{user.run_as.name}}{JSON}"}\
%varsNotEmpty{, "user.realm":"%enc{%map{user.realm}}{JSON}"}\
+ %varsNotEmpty{, "user.realm_domain":"%enc{%map{user.realm_domain}}{JSON}"}\
%varsNotEmpty{, "user.run_by.realm":"%enc{%map{user.run_by.realm}}{JSON}"}\
+ %varsNotEmpty{, "user.run_by.realm_domain":"%enc{%map{user.run_by.realm_domain}}{JSON}"}\
%varsNotEmpty{, "user.run_as.realm":"%enc{%map{user.run_as.realm}}{JSON}"}\
+ %varsNotEmpty{, "user.run_as.realm_domain":"%enc{%map{user.run_as.realm_domain}}{JSON}"}\
%varsNotEmpty{, "user.roles":%map{user.roles}}\
%varsNotEmpty{, "apikey.id":"%enc{%map{apikey.id}}{JSON}"}\
%varsNotEmpty{, "apikey.name":"%enc{%map{apikey.name}}{JSON}"}\
%varsNotEmpty{, "authentication.token.name":"%enc{%map{authentication.token.name}}{JSON}"}\
%varsNotEmpty{, "authentication.token.type":"%enc{%map{authentication.token.type}}{JSON}"}\
+ %varsNotEmpty{, "cross_cluster_access":%map{cross_cluster_access}}\
%varsNotEmpty{, "origin.type":"%enc{%map{origin.type}}{JSON}"}\
%varsNotEmpty{, "origin.address":"%enc{%map{origin.address}}{JSON}"}\
%varsNotEmpty{, "realm":"%enc{%map{realm}}{JSON}"}\
+ %varsNotEmpty{, "realm_domain":"%enc{%map{realm_domain}}{JSON}"}\
%varsNotEmpty{, "url.path":"%enc{%map{url.path}}{JSON}"}\
%varsNotEmpty{, "url.query":"%enc{%map{url.query}}{JSON}"}\
%varsNotEmpty{, "request.method":"%enc{%map{request.method}}{JSON}"}\
@@ -120,16 +149,21 @@ appender.audit_rolling.layout.pattern = {\
# "user.run_by.name" the original authenticated subject name that is impersonating another one.
# "user.run_as.name" if this "event.action" is of a run_as type, this is the subject name to be impersonated as.
# "user.realm" the name of the realm that authenticated "user.name"
+# "user.realm_domain" if "user.realm" is under a domain, this is the name of the domain
# "user.run_by.realm" the realm name of the impersonating subject ("user.run_by.name")
+# "user.run_by.realm_domain" if "user.run_by.realm" is under a domain, this is the name of the domain
# "user.run_as.realm" if this "event.action" is of a run_as type, this is the realm name the impersonated user is looked up from
+# "user.run_as.realm_domain" if "user.run_as.realm" is under a domain, this is the name of the domain
# "user.roles" the roles array of the user; these are the roles that are granting privileges
# "apikey.id" this field is present if and only if the "authentication.type" is "api_key"
# "apikey.name" this field is present if and only if the "authentication.type" is "api_key"
# "authentication.token.name" this field is present if and only if the authenticating credential is a service account token
# "authentication.token.type" this field is present if and only if the authenticating credential is a service account token
+# "cross_cluster_access" this field is present if and only if the associated authentication occurred cross cluster
# "event.type" informs about what internal system generated the event; possible values are "rest", "transport", "ip_filter" and "security_config_change"
# "origin.address" the remote address and port of the first network hop, i.e. a REST proxy or another cluster node
# "realm" name of a realm that has generated an "authentication_failed" or an "authentication_successful"; the subject is not yet authenticated
+# "realm_domain" if "realm" is under a domain, this is the name of the domain
# "url.path" the URI component between the port and the query string; it is percent (URL) encoded
# "url.query" the URI component after the path and before the fragment; it is percent (URL) encoded
# "request.method" the method of the HTTP request, i.e. one of GET, POST, PUT, DELETE, OPTIONS, HEAD, PATCH, TRACE, CONNECT
diff --git a/kibana_7.17.25/Dockerfile b/kibana_7.17.25/Dockerfile
index c9aea95..b6f971b 100644
--- a/kibana_7.17.25/Dockerfile
+++ b/kibana_7.17.25/Dockerfile
@@ -16,41 +16,21 @@ RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y curl
RUN cd /tmp && \
curl --retry 8 -s -L \
--output kibana.tar.gz \
- https://artifacts.elastic.co/downloads/kibana/kibana-7.17.25-linux-$(arch).tar.gz && \
+ https://artifacts.elastic.co/downloads/kibana/kibana-8.16.0-linux-$(arch).tar.gz && \
cd -
-
RUN mkdir /usr/share/kibana
WORKDIR /usr/share/kibana
-RUN tar --strip-components=1 -zxf /tmp/kibana.tar.gz
+RUN tar \
+ --strip-components=1 \
+ -zxf /tmp/kibana.tar.gz
+
# Ensure that group permissions are the same as user permissions.
# This will help when relying on GID-0 to run Kibana, rather than UID-1000.
# OpenShift does this, for example.
# REF: https://docs.openshift.org/latest/creating_images/guidelines.html
RUN chmod -R g=u /usr/share/kibana
-
-################################################################################
-# Build stage 1 (the actual Kibana image):
-#
-# Copy kibana from stage 0
-# Add entrypoint
-################################################################################
-FROM ubuntu:20.04
-EXPOSE 5601
-
-RUN for iter in {1..10}; do \
- export DEBIAN_FRONTEND=noninteractive && \
- apt-get update && \
- apt-get upgrade -y && \
- apt-get install -y --no-install-recommends \
- fontconfig fonts-liberation libnss3 libfontconfig1 ca-certificates curl && \
- apt-get clean && \
- rm -rf /var/lib/apt/lists/* && exit_code=0 && break || exit_code=$? && echo "apt-get error: retry $iter in 10s" && \
- sleep 10; \
- done; \
- (exit $exit_code)
-
# Add an init process, check the checksum to make sure it's a match
RUN set -e ; \
TINI_BIN="" ; \
@@ -70,14 +50,38 @@ RUN set -e ; \
rm "${TINI_BIN}.sha256sum" ; \
mv "${TINI_BIN}" /bin/tini ; \
chmod +x /bin/tini
+RUN mkdir -p /usr/share/fonts/local && \
+ curl --retry 8 -S -L -o /usr/share/fonts/local/NotoSansCJK-Regular.ttc https://github.com/googlefonts/noto-cjk/raw/NotoSansV2.001/NotoSansCJK-Regular.ttc && \
+ echo "5dcd1c336cc9344cb77c03a0cd8982ca8a7dc97d620fd6c9c434e02dcb1ceeb3 /usr/share/fonts/local/NotoSansCJK-Regular.ttc" | sha256sum -c -
-RUN mkdir /usr/share/fonts/local
-RUN curl --retry 8 -S -L -o /usr/share/fonts/local/NotoSansCJK-Regular.ttc https://github.com/googlefonts/noto-cjk/raw/NotoSansV2.001/NotoSansCJK-Regular.ttc
-RUN echo "5dcd1c336cc9344cb77c03a0cd8982ca8a7dc97d620fd6c9c434e02dcb1ceeb3 /usr/share/fonts/local/NotoSansCJK-Regular.ttc" | sha256sum -c -
-RUN fc-cache -v
+
+################################################################################
+# Build stage 1 (the actual Kibana image):
+#
+# Copy kibana from stage 0
+# Add entrypoint
+################################################################################
+FROM ubuntu:20.04
+EXPOSE 5601
+
+RUN for iter in {1..10}; do \
+ export DEBIAN_FRONTEND=noninteractive && \
+ apt-get update && \
+ apt-get upgrade -y && \
+ apt-get install -y --no-install-recommends \
+ fontconfig libnss3 curl ca-certificates && \
+ apt-get clean && \
+ rm -rf /var/lib/apt/lists/* && exit_code=0 && break || exit_code=$? && echo "apt-get error: retry $iter in 10s" && \
+ sleep 10; \
+ done; \
+ (exit $exit_code)
# Bring in Kibana from the initial stage.
COPY --from=builder --chown=1000:0 /usr/share/kibana /usr/share/kibana
+COPY --from=builder --chown=0:0 /bin/tini /bin/tini
+# Load reporting fonts
+COPY --from=builder --chown=0:0 /usr/share/fonts/local/NotoSansCJK-Regular.ttc /usr/share/fonts/local/NotoSansCJK-Regular.ttc
+RUN fc-cache -v
WORKDIR /usr/share/kibana
RUN ln -s /usr/share/kibana /opt/kibana
@@ -104,25 +108,25 @@ RUN groupadd --gid 1000 kibana && \
--home-dir /usr/share/kibana --no-create-home \
kibana
-LABEL org.label-schema.build-date="2024-10-16T11:09:06.408Z" \
+LABEL org.label-schema.build-date="2024-11-07T12:08:23.851Z" \
org.label-schema.license="Elastic License" \
org.label-schema.name="Kibana" \
org.label-schema.schema-version="1.0" \
org.label-schema.url="https://www.elastic.co/products/kibana" \
org.label-schema.usage="https://www.elastic.co/guide/en/kibana/reference/index.html" \
- org.label-schema.vcs-ref="427e9521131a6f5f96fe79fb6d6eca013a5f89f3" \
+ org.label-schema.vcs-ref="a8a07dfc586d78b8f4b7997b00e126363d68c043" \
org.label-schema.vcs-url="https://github.com/elastic/kibana" \
org.label-schema.vendor="Elastic" \
- org.label-schema.version="7.17.25" \
- org.opencontainers.image.created="2024-10-16T11:09:06.408Z" \
+ org.label-schema.version="8.16.0" \
+ org.opencontainers.image.created="2024-11-07T12:08:23.851Z" \
org.opencontainers.image.documentation="https://www.elastic.co/guide/en/kibana/reference/index.html" \
org.opencontainers.image.licenses="Elastic License" \
- org.opencontainers.image.revision="427e9521131a6f5f96fe79fb6d6eca013a5f89f3" \
+ org.opencontainers.image.revision="a8a07dfc586d78b8f4b7997b00e126363d68c043" \
org.opencontainers.image.source="https://github.com/elastic/kibana" \
org.opencontainers.image.title="Kibana" \
org.opencontainers.image.url="https://www.elastic.co/products/kibana" \
org.opencontainers.image.vendor="Elastic" \
- org.opencontainers.image.version="7.17.25"
+ org.opencontainers.image.version="8.16.0"
ENTRYPOINT ["/bin/tini", "--"]
@@ -131,4 +135,4 @@ ENTRYPOINT ["/bin/tini", "--"]
CMD ["/usr/local/bin/kibana-docker"]
-USER kibana
+USER 1000
diff --git a/kibana_7.17.25/bin/kibana-docker b/kibana_7.17.25/bin/kibana-docker
index 08627dd..3c1e7eb 100755
--- a/kibana_7.17.25/bin/kibana-docker
+++ b/kibana_7.17.25/bin/kibana-docker
@@ -23,14 +23,11 @@ kibana_vars=(
apm_oss.sourcemapIndices
apm_oss.spanIndices
apm_oss.transactionIndices
- console.enabled
console.proxyConfig
console.proxyFilter
- cpu.cgroup.path.override
- cpuacct.cgroup.path.override
- csp.rules
csp.strict
csp.warnLegacyBrowsers
+ csp.disableUnsafeEval
csp.script_src
csp.worker_src
csp.style_src
@@ -42,13 +39,32 @@ kibana_vars=(
csp.frame_ancestors
csp.report_uri
csp.report_to
+ csp.report_only.form_action
+ permissionsPolicy.report_to
data.autocomplete.valueSuggestions.terminateAfter
data.autocomplete.valueSuggestions.timeout
+ data.search.asyncSearch.waitForCompletion
+ data.search.asyncSearch.keepAlive
+ data.search.asyncSearch.batchedReduceSize
+ data.search.asyncSearch.pollInterval
+ data.search.sessions.defaultExpiration
+ data.search.sessions.enabled
+ data.search.sessions.maxUpdateRetries
+ data.search.sessions.notTouchedInProgressTimeout
+ data.search.sessions.notTouchedTimeout
+ data.search.sessions.pageSize
+ data.search.sessions.trackingInterval
+ unifiedSearch.autocomplete.valueSuggestions.terminateAfter
+ unifiedSearch.autocomplete.valueSuggestions.timeout
+ unifiedSearch.autocomplete.querySuggestions.enabled
+ unifiedSearch.autocomplete.valueSuggestions.enabled
+ unifiedSearch.autocomplete.valueSuggestions.tiers
elasticsearch.customHeaders
elasticsearch.hosts
elasticsearch.logQueries
elasticsearch.password
elasticsearch.pingTimeout
+ elasticsearch.publicBaseUrl
elasticsearch.requestHeadersWhitelist
elasticsearch.requestTimeout
elasticsearch.serviceAccountToken
@@ -69,39 +85,26 @@ kibana_vars=(
elasticsearch.username
enterpriseSearch.accessCheckTimeout
enterpriseSearch.accessCheckTimeoutWarning
- enterpriseSearch.enabled
enterpriseSearch.host
externalUrl.policy
i18n.locale
- interpreter.enableInVisualize
+ interactiveSetup.enabled
+ interactiveSetup.connectionCheck.interval
kibana.autocompleteTerminateAfter
kibana.autocompleteTimeout
- kibana.defaultAppId
kibana.index
logging.appenders
logging.appenders.console
logging.appenders.file
- logging.dest
- logging.json
logging.loggers
logging.loggers.appenders
logging.loggers.level
logging.loggers.name
- logging.quiet
logging.root
logging.root.appenders
logging.root.level
- logging.rotate.enabled
- logging.rotate.everyBytes
- logging.rotate.keepFiles
- logging.rotate.pollingInterval
- logging.rotate.usePolling
- logging.silent
- logging.useUTC
- logging.verbose
+ map.emsUrl
map.includeElasticMapsService
- map.proxyElasticMapsServiceInMaps
- map.regionmap
map.tilemap.options.attribution
map.tilemap.options.maxZoom
map.tilemap.options.minZoom
@@ -114,9 +117,9 @@ kibana_vars=(
migrations.scrollDuration
migrations.skip
monitoring.cluster_alerts.email_notifications.email_address
- monitoring.enabled
monitoring.kibana.collection.enabled
monitoring.kibana.collection.interval
+ monitoring.ui.ccs.enabled
monitoring.ui.container.elasticsearch.enabled
monitoring.ui.container.logstash.enabled
monitoring.ui.elasticsearch.hosts
@@ -131,16 +134,20 @@ kibana_vars=(
monitoring.ui.max_bucket_size
monitoring.ui.min_interval_seconds
newsfeed.enabled
+ node.roles
ops.cGroupOverrides.cpuAcctPath
ops.cGroupOverrides.cpuPath
ops.interval
path.data
pid.file
+ profiler.signal
regionmap
savedObjects.maxImportExportSize
savedObjects.maxImportPayloadBytes
+ savedObjects.allowHttpApiAccess
security.showInsecureClusterWarning
server.basePath
+ server.cdn.url
server.compression.enabled
server.compression.referrerWhitelist
server.cors
@@ -151,20 +158,24 @@ kibana_vars=(
server.customResponseHeaders
server.defaultRoute
server.host
+ server.http2.allowUnsecure
server.keepAliveTimeout
server.maxPayload
server.maxPayloadBytes
server.name
server.port
+ server.protocol
server.publicBaseUrl
server.requestId.allowFromAnyIp
server.requestId.ipAllowlist
server.rewriteBasePath
+ server.restrictInternalApis
server.securityResponseHeaders.disableEmbedding
server.securityResponseHeaders.permissionsPolicy
server.securityResponseHeaders.referrerPolicy
server.securityResponseHeaders.strictTransportSecurity
server.securityResponseHeaders.xContentTypeOptions
+ server.securityResponseHeaders.crossOriginOpenerPolicy
server.shutdownTimeout
server.socketTimeout
server.ssl.cert
@@ -184,13 +195,12 @@ kibana_vars=(
server.uuid
server.xsrf.allowlist
server.xsrf.disableProtection
- server.xsrf.whitelist
status.allowAnonymous
status.v6ApiFormat
telemetry.allowChangingOptInStatus
telemetry.enabled
+ telemetry.hidePrivacyStatement
telemetry.optIn
- telemetry.optInStatusUrl
telemetry.sendUsageTo
telemetry.sendUsageFrom
tilemap.options.attribution
@@ -198,13 +208,12 @@ kibana_vars=(
tilemap.options.minZoom
tilemap.options.subdomains
tilemap.url
- url_drilldown.enabled
vega.enableExternalUrls
vis_type_vega.enableExternalUrls
xpack.actions.allowedHosts
xpack.actions.customHostSettings
- xpack.actions.enabled
xpack.actions.email.domain_allowlist
+ xpack.actions.enableFooterInEmail
xpack.actions.enabledActionTypes
xpack.actions.maxResponseContentLength
xpack.actions.preconfigured
@@ -222,10 +231,18 @@ kibana_vars=(
xpack.alerting.invalidateApiKeysTask.interval
xpack.alerting.invalidateApiKeysTask.removalDelay
xpack.alerting.defaultRuleTaskTimeout
+ xpack.alerting.rules.run.timeout
+ xpack.alerting.rules.run.ruleTypeOverrides
+ xpack.alerting.cancelAlertsOnRuleTimeout
+ xpack.alerting.rules.minimumScheduleInterval.value
+ xpack.alerting.rules.minimumScheduleInterval.enforce
+ xpack.alerting.rules.run.actions.max
+ xpack.alerting.rules.run.alerts.max
+ xpack.alerting.rules.run.actions.connectorTypeOverrides
+ xpack.alerting.maxScheduledPerMinute
xpack.alerts.healthCheck.interval
xpack.alerts.invalidateApiKeysTask.interval
xpack.alerts.invalidateApiKeysTask.removalDelay
- xpack.apm.enabled
xpack.apm.indices.error
xpack.apm.indices.metric
xpack.apm.indices.onboarding
@@ -245,7 +262,8 @@ kibana_vars=(
xpack.banners.placement
xpack.banners.textColor
xpack.banners.textContent
- xpack.canvas.enabled
+ xpack.cases.files.allowedMimeTypes
+ xpack.cases.files.maxSize
xpack.code.disk.thresholdEnabled
xpack.code.disk.watermarkLow
xpack.code.indexRepoFrequencyMs
@@ -268,7 +286,6 @@ kibana_vars=(
xpack.discoverEnhanced.actions.exploreDataInContextMenu.enabled
xpack.encryptedSavedObjects.encryptionKey
xpack.encryptedSavedObjects.keyRotation.decryptionOnlyKeys
- xpack.event_log.enabled
xpack.event_log.indexEntries
xpack.event_log.logEntries
xpack.fleet.agentPolicies
@@ -278,15 +295,12 @@ kibana_vars=(
xpack.fleet.agents.fleet_server.hosts
xpack.fleet.agents.kibana.host
xpack.fleet.agents.tlsCheckDisabled
- xpack.fleet.enabled
xpack.fleet.packages
+ xpack.fleet.packageVerification.gpgKeyPath
xpack.fleet.registryProxyUrl
xpack.fleet.registryUrl
xpack.graph.canEditDrillDownUrls
- xpack.graph.enabled
xpack.graph.savePolicy
- xpack.grokdebugger.enabled
- xpack.infra.enabled
xpack.infra.query.partitionFactor
xpack.infra.query.partitionSize
xpack.infra.sources.default.fields.container
@@ -299,14 +313,12 @@ kibana_vars=(
xpack.infra.sources.default.metricAlias
xpack.ingestManager.fleet.tlsCheckDisabled
xpack.ingestManager.registryUrl
- xpack.license_management.enabled
- xpack.maps.enabled
- xpack.maps.showMapVisualizationTypes
- xpack.ml.enabled
xpack.observability.annotations.index
- xpack.observability.unsafe.alertingExperience.enabled
- xpack.observability.unsafe.cases.enabled
- xpack.painless_lab.enabled
+ xpack.observability.unsafe.alertDetails.metrics.enabled
+ xpack.observability.unsafe.alertDetails.logs.enabled
+ xpack.observability.unsafe.alertDetails.uptime.enabled
+ xpack.observability.unsafe.alertDetails.observability.enabled
+ xpack.observability.unsafe.thresholdRule.enabled
xpack.reporting.capture.browser.autoDownload
xpack.reporting.capture.browser.chromium.disableSandbox
xpack.reporting.capture.browser.chromium.inspect
@@ -334,10 +346,10 @@ kibana_vars=(
xpack.reporting.csv.maxSizeBytes
xpack.reporting.csv.scroll.duration
xpack.reporting.csv.scroll.size
+ xpack.reporting.csv.scroll.strategy
xpack.reporting.csv.useByteOrderMarkEncoding
xpack.reporting.enabled
xpack.reporting.encryptionKey
- xpack.reporting.index
xpack.reporting.kibanaApp
xpack.reporting.kibanaServer.hostname
xpack.reporting.kibanaServer.port
@@ -353,9 +365,9 @@ kibana_vars=(
xpack.reporting.queue.timeout
xpack.reporting.roles.allow
xpack.reporting.roles.enabled
- xpack.rollup.enabled
xpack.ruleRegistry.write.enabled
- xpack.searchprofiler.enabled
+ xpack.screenshotting.browser.chromium.disableSandbox
+ xpack.security.accessAgreement.message
xpack.security.audit.appender.fileName
xpack.security.audit.appender.layout.highlight
xpack.security.audit.appender.layout.pattern
@@ -379,40 +391,36 @@ kibana_vars=(
xpack.security.authc.saml.maxRedirectURLSize
xpack.security.authc.saml.realm
xpack.security.authc.selector.enabled
- xpack.security.authProviders
xpack.security.cookieName
- xpack.security.enabled
xpack.security.encryptionKey
- xpack.security.loginAssistanceMessage
+ xpack.security.experimental.fipsMode.enabled
xpack.security.loginAssistanceMessage
xpack.security.loginHelp
- xpack.security.public.hostname
- xpack.security.public.port
- xpack.security.public.protocol
xpack.security.sameSiteCookies
xpack.security.secureCookies
xpack.security.session.cleanupInterval
+ xpack.security.session.concurrentSessions.maxSessions
xpack.security.session.idleTimeout
xpack.security.session.lifespan
xpack.security.sessionTimeout
xpack.security.showInsecureClusterWarning
xpack.securitySolution.alertMergeStrategy
xpack.securitySolution.alertIgnoreFields
- xpack.securitySolution.endpointResultListDefaultFirstPageIndex
- xpack.securitySolution.endpointResultListDefaultPageSize
+ xpack.securitySolution.maxExceptionsImportSize
xpack.securitySolution.maxRuleImportExportSize
xpack.securitySolution.maxRuleImportPayloadBytes
xpack.securitySolution.maxTimelineImportExportSize
xpack.securitySolution.maxTimelineImportPayloadBytes
xpack.securitySolution.packagerTaskInterval
- xpack.securitySolution.prebuiltRulesFromFileSystem
- xpack.securitySolution.prebuiltRulesFromSavedObjects
- xpack.spaces.enabled
+ xpack.securitySolution.prebuiltRulesPackageVersion
xpack.spaces.maxSpaces
- xpack.task_manager.enabled
- xpack.task_manager.index
+ xpack.task_manager.capacity
+ xpack.task_manager.claim_strategy
+ xpack.task_manager.auto_calculate_default_ech_capacity
+ xpack.task_manager.discovery.active_nodes_lookback
+ xpack.task_manager.discovery.interval
+ xpack.task_manager.kibanas_per_partition
xpack.task_manager.max_attempts
- xpack.task_manager.max_poll_inactivity_cycles
xpack.task_manager.max_workers
xpack.task_manager.monitored_aggregated_stats_refresh_rate
xpack.task_manager.monitored_stats_required_freshness
@@ -425,6 +433,9 @@ kibana_vars=(
xpack.task_manager.version_conflict_threshold
xpack.task_manager.event_loop_delay.monitor
xpack.task_manager.event_loop_delay.warn_threshold
+ xpack.task_manager.worker_utilization_running_average_window
+ xpack.uptime.index
+ serverless
)
longopts=''
@@ -453,7 +464,7 @@ umask 0002
# paths. Therefore, Kibana provides a mechanism to override
# reading the cgroup path from /proc/self/cgroup and instead uses the
# cgroup path defined the configuration properties
-# cpu.cgroup.path.override and cpuacct.cgroup.path.override.
+# ops.cGroupOverrides.cpuPath and ops.cGroupOverrides.cpuAcctPath.
# Therefore, we set this value here so that cgroup statistics are
# available for the container this process will run in.
diff --git a/logstash_7.17.25/Dockerfile b/logstash_7.17.25/Dockerfile
index 62ad56e..68cb3f1 100644
--- a/logstash_7.17.25/Dockerfile
+++ b/logstash_7.17.25/Dockerfile
@@ -5,30 +8,27 @@ RUN for iter in {1..10}; do \
export DEBIAN_FRONTEND=noninteractive && \
apt-get update -y && \
apt-get upgrade -y && \
- apt-get install -y procps findutils tar gzip curl && \
+ apt-get install -y procps findutils tar gzip && \
apt-get install -y locales && \
+ apt-get install -y curl && \
apt-get clean all && \
locale-gen 'en_US.UTF-8' && \
apt-get clean metadata && \
exit_code=0 && break || exit_code=$? && \
- echo "packaging error: retry $iter in 10s" && \
- apt-get clean all && \
+echo "packaging error: retry $iter in 10s" && \
+apt-get clean all && \
apt-get clean metadata && \
- sleep 10; done; \
- (exit $exit_code)
+sleep 10; done; \
+(exit $exit_code)
# Provide a non-root user to run the process.
RUN groupadd --gid 1000 logstash && \
- adduser --uid 1000 --gid 1000 \
- --home /usr/share/logstash --no-create-home \
- logstash
-
+ adduser --uid 1000 --gid 1000 --home /usr/share/logstash --no-create-home logstash
# Add Logstash itself.
-RUN \
- curl -Lo - https://artifacts.elastic.co/downloads/logstash/logstash-7.17.25-linux-$(arch).tar.gz | \
+RUN curl -Lo - https://artifacts.elastic.co/downloads/logstash/logstash-8.16.0-linux-$(arch).tar.gz | \
tar zxf - -C /usr/share && \
- mv /usr/share/logstash-7.17.25 /usr/share/logstash && \
+ mv /usr/share/logstash-8.16.0 /usr/share/logstash && \
chown --recursive logstash:logstash /usr/share/logstash/ && \
chown -R logstash:root /usr/share/logstash && \
chmod -R g=u /usr/share/logstash && \
@@ -45,14 +44,29 @@ ENV PATH=/usr/share/logstash/bin:$PATH
# Provide a minimal configuration, so that simple invocations will provide
# a good experience.
-COPY config/pipelines.yml config/pipelines.yml
-COPY config/logstash-full.yml config/logstash.yml
-COPY config/log4j2.properties config/
+ COPY config/logstash-full.yml config/logstash.yml
+COPY config/pipelines.yml config/log4j2.properties config/log4j2.file.properties config/
COPY pipeline/default.conf pipeline/logstash.conf
+
RUN chown --recursive logstash:root config/ pipeline/
# Ensure Logstash gets the correct locale by default.
ENV LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8
-COPY env2yaml/env2yaml /usr/local/bin/
+
+COPY env2yaml/env2yaml-amd64 env2yaml/env2yaml-arm64 env2yaml/
+# Copy over the appropriate env2yaml artifact
+RUN env2yamlarch="$(dpkg --print-architecture)"; \
+ case "${env2yamlarch}" in \
+ 'x86_64'|'amd64') \
+ env2yamlarch=amd64; \
+ ;; \
+ 'aarch64'|'arm64') \
+ env2yamlarch=arm64; \
+ ;; \
+ *) echo >&2 "error: unsupported architecture '$env2yamlarch'"; exit 1 ;; \
+ esac; \
+ mkdir -p /usr/local/bin; \
+ cp env2yaml/env2yaml-${env2yamlarch} /usr/local/bin/env2yaml; \
+ rm -rf env2yaml
# Place the startup wrapper script.
COPY bin/docker-entrypoint /usr/local/bin/
@@ -67,15 +82,14 @@ LABEL org.label-schema.schema-version="1.0" \
org.opencontainers.image.vendor="Elastic" \
org.label-schema.name="logstash" \
org.opencontainers.image.title="logstash" \
- org.label-schema.version="7.17.25" \
- org.opencontainers.image.version="7.17.25" \
+ org.label-schema.version="8.16.0" \
+ org.opencontainers.image.version="8.16.0" \
org.label-schema.url="https://www.elastic.co/products/logstash" \
org.label-schema.vcs-url="https://github.com/elastic/logstash" \
org.label-schema.license="Elastic License" \
org.opencontainers.image.licenses="Elastic License" \
org.opencontainers.image.description="Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite 'stash.'" \
- org.label-schema.build-date=2024-10-16T08:48:26+00:00 \
- org.opencontainers.image.created=2024-10-16T08:48:26+00:00
-
+ org.label-schema.build-date=2024-11-06T18:55:01+00:00 \
+ org.opencontainers.image.created=2024-11-06T18:55:01+00:00
ENTRYPOINT ["/usr/local/bin/docker-entrypoint"]
diff --git a/logstash_7.17.25/bin/docker-entrypoint b/logstash_7.17.25/bin/docker-entrypoint
index 19165f1..e2fd33c 100755
--- a/logstash_7.17.25/bin/docker-entrypoint
+++ b/logstash_7.17.25/bin/docker-entrypoint
@@ -6,6 +6,22 @@
# host system.
env2yaml /usr/share/logstash/config/logstash.yml
+if [[ -n "$LOG_STYLE" ]]; then
+ case "$LOG_STYLE" in
+ console)
+ # This is the default. Nothing to do.
+ ;;
+ file)
+ # Overwrite the default config with the stack config. Do this as a
+ # copy, not a move, in case the container is restarted.
+ cp -f /usr/share/logstash/config/log4j2.file.properties /usr/share/logstash/config/log4j2.properties
+ ;;
+ *)
+ echo "ERROR: LOG_STYLE set to [$LOG_STYLE]. Expected [console] or [file]" >&2
+ exit 1 ;;
+ esac
+fi
+
export LS_JAVA_OPTS="-Dls.cgroup.cpuacct.path.override=/ -Dls.cgroup.cpu.path.override=/ $LS_JAVA_OPTS"
if [[ -z $1 ]] || [[ ${1:0:1} == '-' ]] ; then
diff --git a/logstash_8.15.3/config/log4j2.file.properties b/logstash_7.17.25/config/log4j2.file.properties
similarity index 100%
copy from logstash_8.15.3/config/log4j2.file.properties
copy to logstash_7.17.25/config/log4j2.file.properties
diff --git a/logstash_7.17.25/env2yaml/env2yaml b/logstash_7.17.25/env2yaml/env2yaml
deleted file mode 100755
index 91bda12..0000000
diff --git a/logstash_8.15.3/env2yaml/env2yaml-amd64 b/logstash_7.17.25/env2yaml/env2yaml-amd64
similarity index 100%
copy from logstash_8.15.3/env2yaml/env2yaml-amd64
copy to logstash_7.17.25/env2yaml/env2yaml-amd64
diff --git a/logstash_8.15.3/env2yaml/env2yaml-arm64 b/logstash_7.17.25/env2yaml/env2yaml-arm64
similarity index 100%
copy from logstash_8.15.3/env2yaml/env2yaml-arm64
copy to logstash_7.17.25/env2yaml/env2yaml-arm64 Relevant Maintainers: