How to configure Kibana for the Swisscom Elasticsearch public cloud (CloudFoundry) - Docker
Note: This question is specific to the Elasticsearch service provided by Swisscom
Question: (a.k.a: tl;dr)
What configuration is required to get the official Kibana docker container to connect to a Swisscom Elasticsearch Service?
Background:
Up until about a year ago the Swisscom public cloud offered a full ELK stack (Elasticsearch, Logstash, Kibana) in a single service offering. When this service was discontinued, Swisscom replaced it with a standalone Elasticsearch service and asked clients to set up their own Kibana and Logstash solutions via the provided CloudFoundry buildpacks (Kibana, Logstash). The migration recommendation was discussed here: https://ict.swisscom.ch/2018/04/building-the-elk-stack-on-our-new-elasticsearch/
More recently, the underlying OS (called "stack") that runs the applications on Swisscom's CloudFoundry-based PaaS offering has been upgraded. The aforementioned buildpacks are now outdated and have been declared deprecated by Swisscom. The suggestion now is to move to the generic Docker containers provided by Elastic, as discussed here: https://github.com/swisscom/kibana-buildpack/issues/3
What I tried:
CloudFoundry generally works well with Docker containers, and the whole thing should be as straightforward as providing some valid configuration to the Docker container. My current manifest.yml for Kibana looks something like this, but the Kibana application ultimately fails to connect:
---
applications:
- name: kibana-test-example
  docker:
    image: docker.elastic.co/kibana/kibana:6.1.4
  memory: 4G
  disk_quota: 5G
  services:
  - elasticsearch-test-service
  env:
    SERVER_NAME: kibana-test
    ELASTICSEARCH_URL: https://abcdefghijk.elasticsearch.lyra-836.appcloud.swisscom.com
    ELASTICSEARCH_USERNAME: username_provided_by_elasticsearch_service
    ELASTICSEARCH_PASSWORD: password_provided_by_elasticsearch_service
    XPACK_MONITORING_ENABLED: true
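The official Kibana image translates these environment variables into kibana.yml settings (upper case with underscores maps onto the dotted setting names), so the env block above should correspond roughly to the following kibana.yml. This is shown only for reference, with the same placeholder values as in the manifest:

server.name: kibana-test
elasticsearch.url: https://abcdefghijk.elasticsearch.lyra-836.appcloud.swisscom.com
elasticsearch.username: username_provided_by_elasticsearch_service
elasticsearch.password: password_provided_by_elasticsearch_service
xpack.monitoring.enabled: true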
Additional Info:
The Elasticsearch Service provided by Swisscom currently runs on version 6.1.3. As far as I'm aware, it has X-Pack installed.
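To rule out a plain credentials problem independently of Kibana, the service can also be queried directly with the same credentials. A quick sanity check along these lines (same placeholder host and credentials as in the manifest above):

# Should return the cluster name and version 6.1.3 if the credentials are accepted
curl -u 'username_provided_by_elasticsearch_service:password_provided_by_elasticsearch_service' \
  https://abcdefghijk.elasticsearch.lyra-836.appcloud.swisscom.com/

# The same check against the endpoint Kibana is failing on
curl -u 'username_provided_by_elasticsearch_service:password_provided_by_elasticsearch_service' \
  https://abcdefghijk.elasticsearch.lyra-836.appcloud.swisscom.com/_xpack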
What errors are you getting?
I played around with the configuration a bit and have seen different errors, most of which appear to be related to failing authentication against the Elasticsearch Service.
Here is some example initial log output (seriously, though, you need a running Kibana just to be able to read that...):
2019-05-10T08:08:34.43+0200 [CELL/0] OUT Cell eda692ed-f4c3-4a5e-86aa-c0d1641b029f successfully created container for instance 385e5b7f-1570-46cd-532a-c5b4
2019-05-10T08:08:48.60+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:08:48Z","tags":["info","optimize"],"pid":6,"message":"Optimizing and caching bundles for graph, monitoring, apm, kibana, stateSessionStorageRedirect, timelion, login, logout, dashboardViewer and status_page. This may take a few minutes"}
2019-05-10T08:15:07.68+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:07Z","tags":["info","optimize"],"pid":6,"message":"Optimization of bundles for graph, monitoring, apm, kibana, stateSessionStorageRedirect, timelion, login, logout, dashboardViewer and status_page complete in 379.08 seconds"}
2019-05-10T08:15:07.77+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:07Z","tags":["status","plugin:kibana@6.1.4","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
2019-05-10T08:15:07.82+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:07Z","tags":["status","plugin:elasticsearch@6.1.4","info"],"pid":6,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
2019-05-10T08:15:07.86+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:07Z","tags":["status","plugin:xpack_main@6.1.4","info"],"pid":6,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
2019-05-10T08:15:07.86+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:07Z","tags":["status","plugin:graph@6.1.4","info"],"pid":6,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
2019-05-10T08:15:07.88+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:07Z","tags":["status","plugin:monitoring@6.1.4","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
2019-05-10T08:15:07.89+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:07Z","tags":["status","plugin:xpack_main@6.1.4","error"],"pid":6,"state":"red","message":"Status changed from yellow to red - Authentication Exception","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
2019-05-10T08:15:07.89+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:07Z","tags":["status","plugin:graph@6.1.4","error"],"pid":6,"state":"red","message":"Status changed from yellow to red - Authentication Exception","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
2019-05-10T08:15:07.89+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:07Z","tags":["status","plugin:elasticsearch@6.1.4","error"],"pid":6,"state":"red","message":"Status changed from yellow to red - Authentication Exception","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
2019-05-10T08:15:11.39+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:11Z","tags":["reporting","warning"],"pid":6,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
2019-05-10T08:15:11.39+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:11Z","tags":["status","plugin:reporting@6.1.4","error"],"pid":6,"state":"red","message":"Status changed from uninitialized to red - Authentication Exception","prevState":"uninitialized","prevMsg":"uninitialized"}
The actually relevant error message seems to be this:
2019-05-10T08:15:11.66+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:11Z","tags":["license","warning","xpack"],"pid":6,"message":"License information from the X-Pack plugin could not be obtained from Elasticsearch for the [data] cluster. [security_exception] unable to authenticate user [ABCDEFGHIJKLMNOPQRST] for REST request [/_xpack], with { header={ WWW-Authenticate=\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\" } } :: {\"path\":\"/_xpack\",\"statusCode\":401,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"unable to authenticate user [ABCDEFGHIJKLMNOPQRST] for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"}}],\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"unable to authenticate user [ABCDEFGHIJKLMNOPQRST] for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"}},\\\"status\\\":401}\",\"wwwAuthenticateDirective\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"}"}
I have tried setting XPACK_SECURITY_ENABLED: false, as recommended elsewhere, as well as setting the actual SERVER_HOST, which seemed to make things worse.
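For reference, those attempts looked roughly like this in the env block (assuming SERVER_HOST was set to 0.0.0.0, the usual value inside a container; the image maps these names onto xpack.security.enabled and server.host):

  env:
    SERVER_NAME: kibana-test
    SERVER_HOST: 0.0.0.0
    XPACK_SECURITY_ENABLED: false
    XPACK_MONITORING_ENABLED: true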
I would very much appreciate a working example from someone using the existing Kibana docker images to connect to the Swisscom-provided Elasticsearch Service.
Could it be that you mixed up the username and password? When I check my service key, the password comes before the username, which might have led to a copy-paste error on your side:
cf service-key myece mykey | grep kibana_system
    "kibana_system_password": "aKvOpMVrXGCJ4PJht",
    "kibana_system_username": "aksTxVNyLU4JWiQOE6V",
I tried pushing Kibana with your manifest.yml and it works perfectly in my case.
Swisscom has also updated the documentation on how to use Kibana and Logstash with Docker:
https://docs.developer.swisscom.com/service-offerings/kibana-docker.html
https://docs.developer.swisscom.com/service-offerings/logstash-docker.html
Related
How to edit the Bad Gateway page on Traefik
I want to edit the Bad Gateway page from Traefik so that it issues a command like docker restart redis. Does anyone have an idea on how to do this?

A bit of background: I have a somewhat broken setup of Traefik v2.5 and Authelia on my development server, where I sometimes get a Bad Gateway error when accessing a page. Usually this is fixed by clearing all sessions from Redis. I tried to locate the bug, but the error logs aren't helpful and I don't have the time and skills to make the bug reproducible or find the broken configuration. So instead I always ssh into the machine and reset Redis manually.
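One idea for this (sketched below, not tested): instead of editing Traefik's built-in error page, route 502 responses through Traefik's errors middleware to a small helper service that performs the docker restart redis for you. Everything here - the service names, the helper image, the router rule - is made up for illustration:

services:
  redis-resetter:
    # Hypothetical helper you would build yourself: any HTTP hit on /restart
    # runs `docker restart redis` via the mounted Docker socket.
    image: my/redis-resetter
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    labels:
      - "traefik.enable=true"
      - "traefik.http.services.redis-resetter.loadbalancer.server.port=8080"

  my-app:
    # The service that sometimes answers with Bad Gateway
    image: my/app
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.my-app.rule=Host(`app.example.com`)"
      # Hand 502 responses from this router to the helper service
      - "traefik.http.middlewares.bad-gw.errors.status=502"
      - "traefik.http.middlewares.bad-gw.errors.service=redis-resetter"
      - "traefik.http.middlewares.bad-gw.errors.query=/restart"
      - "traefik.http.routers.my-app.middlewares=bad-gw"

Whatever the helper returns is also what the visitor sees instead of the stock Bad Gateway page, so it can double as a friendlier error page.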
Docker login to Gitea registry fails even though curl succeeds
I'm using Gitea (on Kubernetes, behind an Ingress) as a Docker image registry. On my network I have gitea.avril aliased to the IP where it's running. I recently found that my Kubernetes cluster was failing to pull images:

Failed to pull image "gitea.avril/scubbo/<image_name>:<tag>": rpc error: code = Unknown desc = failed to pull and unpack image "gitea.avril/scubbo/<image_name>:<tag>": failed to resolve reference "gitea.avril/scubbo/<image_name>:<tag>": failed to authorize: failed to fetch anonymous token: unexpected status: 530

While trying to debug this, I found that I am unable to log in to the registry, even though curling with the same credentials succeeds:

$ curl -k -u "scubbo:$(cat /tmp/gitea-password)" https://gitea.avril/v2/_catalog
{"repositories":[...populated list...]}

# Tell docker login to treat `gitea.avril` as insecure, since the certificate is provided by Kubernetes
$ cat /etc/docker/daemon.json
{ "insecure-registries": ["gitea.avril"] }

$ docker login -u scubbo -p $(cat /tmp/gitea-password) https://gitea.avril
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://gitea.avril/v2/": received unexpected HTTP status: 530

The first request shows up as a 200 OK in the Gitea logs, the second as a 401 Unauthorized. I get a similar error when I kubectl exec onto the Gitea container itself, install Docker, and try to docker login localhost:3000 - after an error indicating that the server gave an HTTP response to an HTTPS client, it falls back to the http protocol and similarly reports a 530.

I've tried restarting Gitea with GITEA__log__LEVEL=Debug, but that didn't result in any extra logging. I've also tried creating a fresh user (in case I have some weirdness cached somewhere) and using that - same behaviour.

EDIT: after increasing the log level to Trace, I noticed that successful attempts to curl result in the following lines:

...rvices/auth/basic.go:67:Verify() [T] [638d16c4] Basic Authorization: Attempting login for: scubbo
...rvices/auth/basic.go:112:Verify() [T] [638d16c4] Basic Authorization: Attempting SignIn for scubbo
...rvices/auth/basic.go:125:Verify() [T] [638d16c4] Basic Authorization: Logged in user 1:scubbo

whereas attempts to docker login result in:

...es/container/auth.go:27:Verify() [T] [638d16d4] ParseAuthorizationToken: no token

This is the case even when doing docker login localhost:3000 from the Gitea container itself (that is - this is not due to some authentication getting dropped by the Kubernetes Ingress). I'm not sure what could be causing this - I'll start up a fresh Gitea registry to compare.
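One thing worth checking while debugging: unlike the plain curl with Basic auth, docker login first fetches a Bearer token from whatever realm the registry advertises, and Gitea derives that realm from its own configuration rather than from the host name you connected to. Something like this shows the challenge (the output shown is a guess at the shape, not captured from a real instance):

$ curl -sk -D - -o /dev/null https://gitea.avril/v2/ | grep -i www-authenticate
# Expected to look roughly like:
#   www-authenticate: Bearer realm="https://<ROOT_URL>/v2/token",service="container_registry",scope="..."

If that realm points at a host the Docker daemon cannot reach, the login fails even though the catalog call works.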
EDIT: in this GitHub issue, the Gitea team pointed out that standard Docker authentication includes creating a Bearer token which references the ROOT_URL, explaining this issue. Text below preserved for posterity:

...Huh. I have a fix, and I think it indicates some incorrect (or, at least, unexpected) behaviour; but in fairness it only comes about because I'm doing some pretty unexpected things as well...

TL;DR: attempting to docker login to Gitea from an alternative domain name can result in an error if the primary domain name is unavailable; apparently because, while doing so, Gitea itself makes a call to ROOT_URL rather than localhost.

Background

Gitea has a configuration variable called ROOT_URL. This is, among other things, used to generate the copiable "HTTPS" links from repo pages. This is presumed to be the "main" URL on which users will access Gitea. I use Cloudflared Tunnels to make some of my Kubernetes services (including Gitea) available externally (on <foo>.scubbo.org addresses) without opening ports to the outside world. Since Cloudflared tunnels do not automatically update DNS records when a new service is added, I have written a small tool[0] which can be run as an initContainer "before" restarting the Cloudflared tunnel, to refresh DNS[1].

Cold-start problem

However, now there is a cold-start problem: (unless I temporarily disable this initContainer) I can't start Cloudflared tunnels if Gitea is unavailable (because it's the source for the initContainer's image), and Gitea('s public address) will be unavailable until Cloudflared tunnels start up. To get around this cold-start problem, in the Cloudflared initContainers definition, I reference the image by a Kubernetes Ingress name (which is DNS-aliased by my router), gitea.avril, rather than by the public (Cloudflared tunnel) name gitea.scubbo.org. The cold-start startup sequence then becomes:

Cloudflared tries to start up, fails to find a registry at gitea.avril, continues to attempt
Gitea (Pod and Ingress) start up
Cloudflared detects that gitea.avril is now responding, pulls the Cloudflared initContainer image, and successfully deploys
gitea.scubbo.org is now available (via Cloudflared)

So far, so good. Except that testing now indicates[2] that trying to docker login (or docker pull, or, presumably, many other docker commands) against a Gitea instance will result in a call to the ROOT_URL domain - which, if Cloudflared isn't up yet, will result in an error[3].

So what?

My particular usage of this is clearly an edge case, and I could easily get around this in a number of ways (including moving my "Cloudflared tunnel startup" to a separately-initialized, only-privately-available registry). However, what this reduces to is that "docker API calls to a Gitea instance will fail if the ROOT_URL for the instance is unavailable", which seems like unexpected behaviour to me - if the API call can get through to the Gitea service in the first place, it should be able to succeed in calling itself? However, I totally recognize that the complexity of fixing this (going through and replacing $ROOT_URL with localhost:$PORT throughout Gitea) might not be worth the value. I'll open an issue with the Gitea team, but I'd be perfectly content with a WILLNOTFIX.

Footnotes

[0]: Note - depending on when you follow that link, you might see a red warning banner indicating "_Your ROOT_URL in app.ini is https://gitea.avril/ but you are visiting https://gitea.scubbo.org/scubbo/cloudflaredtunneldns_". That's because of this very issue!
[1]: Note from the linked issue that the Cloudflared team indicate that this is unexpected usage - "We don't expect the origins to be dynamically added or removed services behind cloudflared".
[2]: I think this is new behaviour, as I'm reasonably certain that I've done a successful "cold start" before. However, I wouldn't swear to it.
[3]: After I've , the error is instead error parsing HTTP 404 response body: unexpected end of JSON input: "" rather than the 530-related errors I got before. This is probably a quirk of Cloudflared's caching or DNS behaviour. I'm working on a minimal reproducing example that circumvents Cloudflared.
ELK stack error: Elasticsearch doesn't authorize Logstash
I followed this blog to start the ELK stack from a Docker Compose file, but used version 8.1.2. It is not running successfully: Elasticsearch doesn't authorize Logstash. The error from Logstash is:

[main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
Did you try using HTTPS instead of HTTP? Security is enabled by default in Elasticsearch 8.
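In addition, the 401 means Logstash is reaching Elasticsearch but not sending valid credentials, so the output also needs a user and password. A minimal sketch for an 8.x cluster with security on (assuming the built-in elastic user, the Compose service name elasticsearch, and certificate checks relaxed only for a quick test):

output {
  elasticsearch {
    hosts    => ["https://elasticsearch:9200"]
    user     => "elastic"
    password => "${ELASTIC_PASSWORD}"
    # Quick test only - point cacert at the cluster CA instead for real use
    ssl_certificate_verification => false
    index    => "logstash-%{+YYYY.MM.dd}"
  }
}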
Trouble in Wazuh 4 production cluster (Docker) with Logstash output to Elasticsearch
I have a production cluster of Wazuh 4 with Open Distro for Elasticsearch, Kibana and SSL security in Docker, and I am trying to connect Logstash (a Docker image of Logstash) with Elasticsearch. I am getting this:

Attempted to resurrect connection to dead ES instance, but got an error

I have generated SSL certificates for Logstash and tried other ways to connect (changed the output of Logstash, went through Filebeat modules) without success. What is the solution to this problem for Wazuh 4?
Let me help you with this. Our current documentation is valid for distributed architectures where Logstash is installed on the same machine as Elasticsearch, so we should consider adding documentation for the proper configuration of separated Logstash instances.

OK, now let's see if we can fix your problem. After installing Logstash, I assume that you configured it using the distributed configuration file, as seen in this step (Logstash.2.b). Keep in mind that you need to specify the Elasticsearch IP address at the bottom of the file:

output {
    elasticsearch {
        hosts => ["<PUT_HERE_ELASTICSEARCH_IP>:9200"]
        index => "wazuh-alerts-3.x-%{+YYYY.MM.dd}"
        document_type => "wazuh"
    }
}

After saving the file and restarting the Logstash service, you may be getting this kind of log message in /var/log/logstash/logstash-plain.log:

Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://192.168.56.104:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://192.168.56.104:9200/][Manticore::SocketException] Connection refused (Connection refused)"}

I discovered that we need to edit the Elasticsearch configuration file and modify the network.host setting. On my test environment, this setting appears commented out like this:

#network.host: 192.168.0.1

And I changed it to this:

network.host: 0.0.0.0

(Notice that I removed the # at the beginning of the line.) The 0.0.0.0 address will make Elasticsearch listen on all network interfaces. After that, I restarted the Elasticsearch service using systemctl restart elasticsearch, and then I started to see the alerts being indexed in Elasticsearch.

Please try these steps and let's see if everything is properly working now. Let me know if you need more help with this, I'll be glad to assist you. Regards,
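Since the setup in the question is the Docker one with Open Distro security and SSL enabled, the output block will additionally need HTTPS, credentials and the CA certificate - roughly along these lines (the user, host name and certificate path are placeholders to adapt to your compose files):

output {
    elasticsearch {
        hosts    => ["https://elasticsearch:9200"]
        user     => "admin"
        password => "<YOUR_ADMIN_PASSWORD>"
        ssl      => true
        cacert   => "/usr/share/logstash/config/root-ca.pem"
        index    => "wazuh-alerts-3.x-%{+YYYY.MM.dd}"
    }
}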
Nextcloud in docker behind traefik on unraid
I'm running Traefik as a reverse proxy on my unRAID (6.6.6). Apps like Sonarr/Radarr, NZBGet and Organizr all work fine, but that's mostly due to the fact that these are super easy to set up. You only need 4 Traefik-specific labels and that's it:

traefik.enable=true
traefik.backend=radarr
traefik.frontend.rule=PathPrefix: /radarr
traefik.port=7878
traefik.frontend.auth.basic.users=username:password

So far so good, everything is using SSL and working great. But as soon as I have to configure some extra stuff for the containers to work behind a reverse proxy, I get lost. I've read dozens of guides regarding Nextcloud, but I can't get it to work.

Currently I'm using the linuxserver/nextcloud Docker image and from my internal network it's working great. I got everything set up, added users and SMB shares, and everybody can connect fine. But I can't get it to work behind Traefik using a subdirectory. It's probably just some Traefik labels I need to add to the Nextcloud container, but I'm simply too much of a newb to know which ones I need.

My first issue was that Nextcloud forces HTTPS, which Traefik doesn't like unless you configure some stuff. So for now I'm just using the traefik.frontend.auth.forward.tls.insecureSkipVerify=true label to work around this. I know it's potentially a security issue, but if I'm not mistaken it only opens up the possibility of a man-in-the-middle attack, which shouldn't be too much of an issue since both Traefik and Nextcloud are running on the same machine (and besides, everything else is going over HTTP).

So now that I got that working, I get an Error 500 message when I try to open mydomain.tld/nextcloud. The Traefik log says "Error calling . Cause: Get : unsupported protocol scheme \"\"". I tried adding some labels I found in a guide (https://www.smarthomebeginner.com/traefik-reverse-proxy-tutorial-for-docker/#NextCloud_Your_Own_Cloud_Storage):

"traefik.frontend.headers.SSLRedirect=true"
"traefik.frontend.headers.STSSeconds=315360000"
"traefik.frontend.headers.browserXSSFilter=true"
"traefik.frontend.headers.contentTypeNosniff=true"
"traefik.frontend.headers.forceSTSHeader=true"
"traefik.frontend.headers.SSLHost=mydomain.tld"
"traefik.frontend.headers.STSPreload=true"
"traefik.frontend.headers.frameDeny=true"

I just thought I'd try it, maybe I'd get lucky. Sadly I didn't. Still Error 500.
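(For anyone else hitting the same wall: the "unsupported protocol scheme" message usually means Traefik does not know the backend speaks HTTPS. A label set that gets suggested for serving Nextcloud from a sub-directory with Traefik v1 looks roughly like the sketch below - untested in this exact setup, and the backend name and port are assumptions for the linuxserver image, which serves HTTPS on 443 internally.)

traefik.enable=true
traefik.backend=nextcloud
traefik.protocol=https
traefik.port=443
traefik.frontend.rule=PathPrefixStrip: /nextcloud

Nextcloud itself usually also has to be told about the sub-directory (the overwritewebroot setting in its config.php), otherwise its redirects point back to the root.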
In your Traefik configuration, enable debug logging using:

loglevel = "DEBUG"

More info here: https://docs.traefik.io/configuration/logs/

After doing this I realized that my Docker label was not correctly applying the InsecureSkipVerify = true line in my config. The error I was able to see in the logs was:

500 Internal Server Error' caused by: x509: cannot validate certificate for 172.17.0.x because it doesn't contain any IP SANs"

To work around this I had to add InsecureSkipVerify = true directly to the traefik.toml file for this to work correctly.
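For reference, both of those are top-level options in a Traefik v1 traefik.toml, roughly:

# traefik.toml (Traefik v1) - global options
logLevel = "DEBUG"

# Accept the self-signed certificate presented by HTTPS backends
insecureSkipVerify = true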