Errors with Apache basic auth in front of Elasticsearch and Kibana #10252

Description

@pgporada

Kibana version: 5.2.0-1.x86_64

Elasticsearch version: 5.2.0-1.noarch

Server OS version: CentOS 7

Browser version: Firefox 51.0.1

Browser OS version: Ubuntu 16.10

Original install method (e.g. download page, yum, from source, etc.): yum

Description of the problem including expected versus actual behavior:
I believe I am running into an issue similar to #3302.

My setup is as follows (diagram: http://imgur.com/HMBOTux). There is basic auth in front of both the Kibana box and the Elasticsearch boxes:

[Kibana <=> Apache] <----> LB ---> (Cluster of boxes that run [Apache <==> Elasticsearch])

When I navigate to kibana.mycompany.com I am greeted with a basic auth popup, but it is the Elasticsearch credentials, not the Kibana credentials, that are required (multiple times) to authenticate into Kibana. Right away this is very odd and wrong. I have verified that the correct credentials are deployed to the correct servers, and even after disabling basic auth on the Apache in front of Kibana I am still presented with the Elasticsearch basic auth popup.

When I disable basic auth on the Apache instances fronting Kibana and Elasticsearch, everything works.

Here are my configurations:
/etc/kibana/kibana.yml

server.port: 5601
server.host: "127.0.0.1"
server.name: "ip-192-168-1-10"
elasticsearch.url: "https://esuser:espass@elasticsearch.mycompany.internal:443"
elasticsearch.preserveHost: false
kibana.index: ".kibana"
elasticsearch.username: "esuser"
elasticsearch.password: "espass"
elasticsearch.ssl.verify: false
elasticsearch.pingTimeout: 1500
elasticsearch.requestTimeout: 30000
elasticsearch.startupTimeout: 5000
logging.dest: /var/log/kibana/kibana.log
logging.quiet: true
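
Editor's note: two things in this kibana.yml stand out. First, the credentials are supplied twice — embedded in elasticsearch.url and again via elasticsearch.username/elasticsearch.password — so one of the two is redundant. Second, and more likely the culprit: Kibana 5.x defaults elasticsearch.requestHeadersWhitelist to [ authorization ], meaning the Authorization header the browser sends for the Kibana-side basic auth is forwarded to Elasticsearch, where it overrides the configured credentials and fails against the Elasticsearch htpasswd — which would explain why the browser is then prompted for the Elasticsearch credentials. A sketch of a possible fix, untested against this exact setup:

# kibana.yml — stop forwarding client-side headers, so Kibana always
# authenticates to Elasticsearch with elasticsearch.username/password
elasticsearch.requestHeadersWhitelist: []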

kibana vhost

ServerName kibana.mycompany.com
ErrorLog /var/log/httpd/error_log
CustomLog /var/log/httpd/access_log combined
DocumentRoot /var/www/app/public_html
<Directory "/var/www/app/public_html">
    AllowOverride All
    Options -Indexes -FollowSymLinks -Includes
    DirectoryIndex index.html index.php
    Require all granted
</Directory>

<LocationMatch "^/(.*)">
    AuthType Basic
    AuthName "Restricted"
    AuthBasicProvider file
    AuthUserFile "/opt/kibana/htpasswd"
    Require valid-user
</LocationMatch>

# CVE-2016-5385, CVE-2016-5387
RequestHeader unset Proxy early

# Hide git related stuff
RewriteRule ^(.*/)?\.git+ - [R=404,L]
RewriteRule ^(.*/)?\.gitignore+ - [R=404,L]

ProxyRequests Off
ProxyPreserveHost On
ProxyPass        / http://127.0.0.1:5601/
ProxyPassReverse / http://127.0.0.1:5601/
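
Editor's note: an alternative fix at the proxy layer, assuming mod_headers is loaded, is to strip the client's Authorization header once Apache has validated it, so the Kibana-layer credentials are never proxied through Kibana toward Elasticsearch. A sketch (RequestHeader runs after the authentication phase by default, so Apache's own basic auth still works; verify against your Apache version):

# Drop the validated Kibana-layer credentials before proxying,
# so they cannot leak downstream to Elasticsearch
RequestHeader unset Authorization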

elasticsearch vhost(s)

ServerName es.mycompany.com
ErrorLog /var/log/httpd/error_log
CustomLog /var/log/httpd/access_log combined
DocumentRoot /var/www/app/public_html

<Directory "/var/www/app/public_html">
    AllowOverride All
    Options -Indexes -FollowSymLinks -Includes
    DirectoryIndex index.html index.php
    Require all granted
</Directory>

<LocationMatch "^/(.*)">
    AuthType Basic
    AuthName "Restricted"
    AuthBasicProvider file
    AuthUserFile "/opt/elasticsearch/htpasswd"
    Require valid-user
</LocationMatch>

# CVE-2016-5385, CVE-2016-5387
RequestHeader unset Proxy early

# Hide git related stuff
RewriteRule ^(.*/)?\.git+ - [R=404,L]
RewriteRule ^(.*/)?\.gitignore+ - [R=404,L]

ProxyRequests Off
ProxyPreserveHost On
ProxyPass        / http://0.0.0.0:9200/
ProxyPassReverse / http://0.0.0.0:9200/
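
Editor's note: 0.0.0.0 is a bind address, not a connect address; although many stacks treat connecting to it as localhost, it would be safer to target the loopback explicitly. A sketch:

ProxyPass        / http://127.0.0.1:9200/
ProxyPassReverse / http://127.0.0.1:9200/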

Steps to reproduce:

  1. Deploy apache + kibana on a node
  2. Deploy apache + elasticsearch on a different node

Relevant log output:
/var/log/kibana/kibana.log

{"type":"log","@timestamp":"2017-02-08T20:45:50Z","tags":["listening","info"],"pid":32381,"message":"Server running at http://127.0.0.1:5601"}
{"type":"log","@timestamp":"2017-02-08T20:45:50Z","tags":["status","plugin:elasticsearch@5.2.0","error"],"pid":32381,"state":"red","message":"Status changed from yellow to red - Authentication Exception","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2017-02-08T20:45:50Z","tags":["status","ui settings","error"],"pid":32381,"state":"red","message":"Status changed from yellow to red - Elasticsearch plugin is red","prevState":"yellow","prevMsg":"Elasticsearch plugin is yellow"}
{"type":"log","@timestamp":"2017-02-08T20:46:19Z","tags":["listening","info"],"pid":32402,"message":"Server running at http://127.0.0.1:5601"}
{"type":"log","@timestamp":"2017-02-08T20:46:22Z","tags":["status","plugin:elasticsearch@5.2.0","error"],"pid":32402,"state":"red","message":"Status changed from yellow to red - Request Timeout after 3000ms","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2017-02-08T20:46:22Z","tags":["status","ui settings","error"],"pid":32402,"state":"red","message":"Status changed from yellow to red - Elasticsearch plugin is red","prevState":"yellow","prevMsg":"Elasticsearch plugin is yellow"}
{"type":"log","@timestamp":"2017-02-08T20:48:04Z","tags":["listening","info"],"pid":32440,"message":"Server running at http://127.0.0.1:5601"}
{"type":"log","@timestamp":"2017-02-08T20:48:07Z","tags":["status","plugin:elasticsearch@5.2.0","error"],"pid":32440,"state":"red","message":"Status changed from yellow to red - Request Timeout after 3000ms","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2017-02-08T20:48:07Z","tags":["status","ui settings","error"],"pid":32440,"state":"red","message":"Status changed from yellow to red - Elasticsearch plugin is red","prevState":"yellow","prevMsg":"Elasticsearch plugin is yellow"}

/var/log/elasticsearch/elasticsearch.log

[2017-02-08T14:47:12,426][INFO ][o.e.p.PluginsService     ] [ip-192-168-1-20] loaded module [aggs-matrix-stats]
[2017-02-08T14:47:12,426][INFO ][o.e.p.PluginsService     ] [ip-192-168-1-20] loaded module [ingest-common]
[2017-02-08T14:47:12,426][INFO ][o.e.p.PluginsService     ] [ip-192-168-1-20] loaded module [lang-expression]
[2017-02-08T14:47:12,426][INFO ][o.e.p.PluginsService     ] [ip-192-168-1-20] loaded module [lang-groovy]
[2017-02-08T14:47:12,427][INFO ][o.e.p.PluginsService     ] [ip-192-168-1-20] loaded module [lang-mustache]
[2017-02-08T14:47:12,427][INFO ][o.e.p.PluginsService     ] [ip-192-168-1-20] loaded module [lang-painless]
[2017-02-08T14:47:12,427][INFO ][o.e.p.PluginsService     ] [ip-192-168-1-20] loaded module [percolator]
[2017-02-08T14:47:12,427][INFO ][o.e.p.PluginsService     ] [ip-192-168-1-20] loaded module [reindex]
[2017-02-08T14:47:12,427][INFO ][o.e.p.PluginsService     ] [ip-192-168-1-20] loaded module [transport-netty3]
[2017-02-08T14:47:12,427][INFO ][o.e.p.PluginsService     ] [ip-192-168-1-20] loaded module [transport-netty4]
[2017-02-08T14:47:12,427][INFO ][o.e.p.PluginsService     ] [ip-192-168-1-20] loaded plugin [discovery-ec2]
[2017-02-08T14:47:18,520][INFO ][o.e.n.Node               ] [ip-192-168-1-20] initialized
[2017-02-08T14:47:18,520][INFO ][o.e.n.Node               ] [ip-192-168-1-20] starting ...
[2017-02-08T14:47:18,686][WARN ][i.n.u.i.MacAddressUtil   ] Failed to find a usable hardware address from the network interfaces; using random bytes: 71:ec:1a:66:d7:49:15:0d
[2017-02-08T14:47:18,806][INFO ][o.e.t.TransportService   ] [ip-192-168-1-20] publish_address {192.168.1.20:9300}, bound_addresses {[::]:9300}
[2017-02-08T14:47:18,811][INFO ][o.e.b.BootstrapChecks    ] [ip-192-168-1-20] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-02-08T14:47:23,817][INFO ][o.e.c.s.ClusterService   ] [ip-192-168-1-20] new_master {ip-192-168-1-20}{9FRxvWz_TrqxGJqcmgaBOw}{_a-5tz8ITBaX32pKraEJHA}{192.168.1.20}{192.168.1.20:9300}{aws_availability_zone=us-east-1a}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-02-08T14:47:23,881][INFO ][o.e.h.HttpServer         ] [ip-192-168-1-20] publish_address {192.168.1.20:9200}, bound_addresses {[::]:9200}
[2017-02-08T14:47:23,881][INFO ][o.e.n.Node               ] [ip-192-168-1-20] started
[2017-02-08T14:47:24,111][INFO ][o.e.g.GatewayService     ] [ip-192-168-1-20] recovered [1] indices into cluster_state
[2017-02-08T14:47:24,521][INFO ][o.e.c.r.a.AllocationService] [ip-192-168-1-20] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).

Metadata

Labels: Team:Operations (label for Operations Team), notabug (when the issue is closed, this label indicates that it wasn't a bug)