Unable to log into Elasticsearch - Docker

I am trying to set up Filebeat/Elasticsearch/Kibana to monitor log files for my application.
I have the fairly minimal Compose file shown below.
Before I enabled security, going to localhost:19200 returned Elasticsearch responses. Now it prompts me to sign in, but neither elastic/changeme nor kibana/changeme is accepted.
Attempting to change the password with curl by
curl -XPOST -u elastic:changeme 'localhost:19200/_security/user/elastic/_password' -H "Content-Type: application/json" -d "{
\"password\" : \"insecure\"
}"
also fails with an authentication error.
From the server log, the error is
elasticsearch_1 | {"type": "server", "timestamp": "2019-09-16T20:59:06,588+0000", "level": "INFO", "component": "o.e.x.s.a.AuthenticationService", "cluster.name": "compass", "node.name": "node-1", "cluster.uuid": "RZ_T1pT5Tp--3Jm8q89NVw", "node.id": "Q-lFQ58gRGOPPOEyzy6Vrw", "message": "Authentication of [elastic] was terminated by realm [reserved] - failed to authenticate user [elastic]" }
The JSON returned to curl is
{"error":{"root_cause":[{"type":"security_exception","reason":"failed to authenticate user [elastic]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"failed to authenticate user [elastic]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}
What am I doing wrong?
docker-compose.yml
version: "2.4"
services:
# Accumulate logs into elasticstack
elasticsearch:
image: "docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}"
environment:
- http.host=0.0.0.0
- transport.host=127.0.0.1
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
mem_limit: ${ES_MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- ./config/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
- data:/usr/share/elasticsearch/data
#Port 9200 is available on the host. Need to for user to access as well as Packetbeat
ports: ['19200:9200']
#Healthcheck to confirm availability of ES. Other containers wait on this.
healthcheck:
test: ["CMD", "curl","-s" ,"-f", "-u", "elastic:${ES_PASSWORD}", "http://localhost:9200/_cat/health"]
#Internal network for the containers
networks: ['stack']
volumes:
#Es data
data:
driver: local
networks: {stack: {}}
.env
#ELK Stack
ELASTIC_VERSION=7.3.2
ES_PASSWORD=insecure
ES_MEM_LIMIT=2g
ES_JVM_HEAP=1024m
config/elasticsearch/elasticsearch.yml
cluster.name: compass
node.name: node-1
path.data: /usr/share/elasticsearch/data
http.port: 9200
network.host: 0.0.0.0
xpack.security:
  enabled: true
  transport.ssl.enabled: true

You should set up the built-in user passwords when you enable security, using
./bin/elasticsearch-setup-passwords interactive
See https://www.elastic.co/guide/en/elastic-stack-overview/current/get-started-built-in-users.html
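Since Elasticsearch is running in a container here, the command has to be run inside it; a minimal sketch using the service name from the compose file above:

docker-compose exec elasticsearch bin/elasticsearch-setup-passwords interactive

Once the elastic password is set, make sure ES_PASSWORD in .env matches it so the healthcheck keeps authenticating successfully.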

Related

Remote debug with VSCode, go & nakama

I have the following problem. I am trying to run a Nakama game server in Go with Docker.
For debugging purposes I want to use Delve.
I am not really sure whether I am proceeding correctly, so maybe my problem is actually something completely different. But I think my Delve does not connect to the Nakama build.
What have I done so far?
I created a new Go project and put a bit of code in main.go. After that I created a Dockerfile and a docker-compose.yml.
I think the mistake is in one of these two files.
My Dockerfile looks like this:
FROM golang
ENV GOPATH /home/marc/go_projects
ENV PATH ${GOPATH}/bin:/usr/local/go/bin:$PATH
RUN go install github.com/go-delve/delve/cmd/dlv@latest
FROM heroiclabs/nakama-pluginbuilder:3.3.0 AS go-builder
ENV GO111MODULE on
ENV CGO_ENABLED 1
WORKDIR $GOPATH/gamedev
COPY go.mod .
COPY main.go .
COPY vendor/ vendor/
RUN go build --trimpath --mod=vendor --buildmode=plugin -o ./backend.so
FROM heroiclabs/nakama:3.3.0
COPY --from=go-builder /backend/backend.so /nakama/data/modules/
COPY local.yml /nakama/data/
And my docker-compose.yml:
version: '3'
services:
  postgres:
    container_name: postgres
    image: postgres:9.6-alpine
    environment:
      - POSTGRES_DB=nakama
      - POSTGRES_PASSWORD=localdb
    volumes:
      - data:/var/lib/postgresql/data
    expose:
      - "8080"
      - "5432"
    ports:
      - "5432:5432"
      - "8080:8080"
  nakama:
    container_name: nakama
    image: heroiclabs/nakama:3.12.0
    entrypoint:
      - "/bin/sh"
      - "-ecx"
      - >
        /nakama/nakama migrate up --database.address postgres:localdb@postgres:5432/nakama &&
        exec /nakama/nakama --name nakama1 --database.address postgres:localdb@postgres:5432/nakama --logger.level DEBUG --session.token_expiry_sec 7200
    restart: always
    links:
      - "postgres:db"
    depends_on:
      - postgres
    volumes:
      - ./:/nakama/data
    expose:
      - "7349"
      - "7350"
      - "7351"
      - "2345"
    ports:
      - "2345:2345"
      - "7349:7349"
      - "7350:7350"
      - "7351:7351"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:7350/"]
      interval: 10s
      timeout: 5s
      retries: 5
volumes:
  data:
When I build and run the Docker image, it runs with no complaints. I can open the Nakama web interface, so that part is running fine.
But when I try to connect the debugger, it looks like it creates a successful connection but closes it right away.
So my launch.json config is the following:
"name": "Connect to server",
"type": "go",
"request": "attach",
"mode": "remote",
"remotePath": "/home/marc/go_projects/bin/dlv",
"port": 2345,
"host": "127.0.0.1",
"trace": "verbose"
This is what I get in /tmp/vs-code-debug.txt:
[07:24:05.882 UTC] From client: initialize({"clientID":"vscode","clientName":"Visual Studio Code","adapterID":"go","pathFormat":"path","linesStartAt1":true,"columnsStartAt1":true,"supportsVariableType":true,"supportsVariablePaging":true,"supportsRunInTerminalRequest":true,"locale":"de","supportsProgressReporting":true,"supportsInvalidatedEvent":true,"supportsMemoryReferences":true})
[07:24:05.882 UTC] InitializeRequest
[07:24:05.882 UTC] To client: {"seq":0,"type":"response","request_seq":1,"command":"initialize","success":true,"body":{"supportsConditionalBreakpoints":true,"supportsConfigurationDoneRequest":true,"supportsSetVariable":true}}
[07:24:05.883 UTC] InitializeResponse
[07:24:05.883 UTC] From client: attach({"name":"Connect to server","type":"go","request":"attach","mode":"remote","remotePath":"/home/marc/go_projects/bin/dlv","port":2345,"host":"127.0.0.1","trace":"verbose","__configurationTarget":5,"packagePathToGoModPathMap":{"/home/marc/go_projects/gamedev":"/home/marc/go_projects/gamedev"},"debugAdapter":"legacy","showRegisters":false,"showGlobalVariables":false,"substitutePath":[],"showLog":false,"logOutput":"debugger","dlvFlags":[],"hideSystemGoroutines":false,"dlvLoadConfig":{"followPointers":true,"maxVariableRecurse":1,"maxStringLen":64,"maxArrayValues":64,"maxStructFields":-1},"cwd":"/home/marc/go_projects/gamedev","dlvToolPath":"/home/marc/go_projects/bin/dlv","env":{"ELECTRON_RUN_AS_NODE":"1","GJS_DEBUG_TOPICS":"JS ERROR;JS LOG","USER":"marc","SSH_AGENT_PID":"1376","XDG_SESSION_TYPE":"x11","SHLVL":"0","HOME":"/home/marc","DESKTOP_SESSION":"ubuntu","GIO_LAUNCHED_DESKTOP_FILE":"/usr/share/applications/code.desktop","GTK_MODULES":"gail:atk-bridge","GNOME_SHELL_SESSION_MODE":"ubuntu","MANAGERPID":"1053","DBUS_SESSION_BUS_ADDRESS":"unix:path=/run/user/1000/bus","GIO_LAUNCHED_DESKTOP_FILE_PID":"6112","IM_CONFIG_PHASE":"1","MANDATORY_PATH":"/usr/share/gconf/ubuntu.mandatory.path","LOGNAME":"marc","_":"/usr/share/code/code","JOURNAL_STREAM":"8:44286","DEFAULTS_PATH":"/usr/share/gconf/ubuntu.default.path","XDG_SESSION_CLASS":"user","USERNAME":"marc","GNOME_DESKTOP_SESSION_ID":"this-is-deprecated","WINDOWPATH":"2","PATH":"/home/marc/.nvm/versions/node/v17.8.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/go/bin:/usr/local/go/bin:/usr/local/go/bin","SESSION_MANAGER":"local/mobile:#/tmp/.ICE-unix/1458,unix/mobile:/tmp/.ICE-unix/1458","INVOCATION_ID":"fe605ca56aa646859602b81e264bf01b","XDG_RUNTIME_DIR":"/run/user/1000","XDG_MENU_PREFIX":"gnome-","DISPLAY":":0","LANG":"de_DE.UTF-8","XDG_CURRENT_DESKTOP":"Unity","XAUTHORITY":"/run/user/1000/gdm/Xauthority","XDG_SESSION_DESKTOP":"ubuntu","XMODIFIERS":"#im=ibus","SSH_AUTH_SOCK":"/run/user/1000/keyring/ssh","SHELL":"/bin/bash","QT_ACCESSIBILITY":"1","GDMSESSION":"ubuntu","GPG_AGENT_INFO":"/run/user/1000/gnupg/S.gpg-agent:0:1","GJS_DEBUG_OUTPUT":"stderr","QT_IM_MODULE":"ibus","PWD":"/home/marc","XDG_DATA_DIRS":"/usr/share/ubuntu:/usr/local/share/:/usr/share/:/var/lib/snapd/desktop","XDG_CONFIG_DIRS":"/etc/xdg/xdg-ubuntu:/etc/xdg","_WSREP_START_POSITION":"","CHROME_DESKTOP":"code-url-handler.desktop","ORIGINAL_XDG_CURRENT_DESKTOP":"ubuntu:GNOME","VSCODE_CWD":"/home/marc","GDK_BACKEND":"x11","VSCODE_NLS_CONFIG":"{\"locale\":\"de\",\"availableLanguages\":{\"*\":\"de\"},\"_languagePackId\":\"b61d3f473b0358bc955527db7340fd23.de\",\"_translationsConfigFile\":\"/home/marc/.config/Code/clp/b61d3f473b0358bc955527db7340fd23.de/tcf.json\",\"_cacheRoot\":\"/home/marc/.config/Code/clp/b61d3f473b0358bc955527db7340fd23.de\",\"_resolvedLanguagePackCoreLocation\":\"/home/marc/.config/Code/clp/b61d3f473b0358bc955527db7340fd23.de/30d9c6cd9483b2cc586687151bcbcd635f373630\",\"_corruptedFile\":\"/home/marc/.config/Code/clp/b61d3f473b0358bc955527db7340fd23.de/corrupted.info\",\"_languagePackSupport\":true}","VSCODE_CODE_CACHE_PATH":"/home/marc/.config/Code/CachedData/30d9c6cd9483b2cc586687151bcbcd635f373630","VSCODE_IPC_HOOK":"/run/user/1000/vscode-432c1660-1.68.1-main.sock","VSCODE_PID":"6112","NVM_INC":"/home/marc/.nvm/versions/node/v17.8.0/include/node","LS_COLORS":"","NVM_DIR":"/home/marc/.nvm","LESSCLOSE":"/usr/bin/lesspipe %s %s","LESSOPEN":"| /usr/bin/lesspipe 
%s","NVM_CD_FLAGS":"","NVM_BIN":"/home/marc/.nvm/versions/node/v17.8.0/bin","GOPATH":"/home/marc/go_projects","VSCODE_AMD_ENTRYPOINT":"vs/workbench/api/node/extensionHostProcess","VSCODE_PIPE_LOGGING":"true","VSCODE_VERBOSE_LOGGING":"true","VSCODE_LOG_NATIVE":"false","VSCODE_HANDLES_UNCAUGHT_ERRORS":"true","VSCODE_LOG_STACK":"false","VSCODE_IPC_HOOK_EXTHOST":"/run/user/1000/vscode-ipc-8cf508cc-d427-4616-b6b5-61d3c3e5d99f.sock","APPLICATION_INSIGHTS_NO_DIAGNOSTIC_CHANNEL":"1","GOMODCACHE":"/home/marc/go_projects/pkg/mod","GOPROXY":"https://proxy.golang.org,direct"},"__sessionId":"1893ab9e-5a19-45e7-8b39-46db079cdbe3"})
[07:24:05.883 UTC] AttachRequest
[07:24:05.884 UTC] Start remote debugging: connecting 127.0.0.1:2345
[07:24:06.191 UTC] To client: {"seq":0,"type":"event","event":"initialized"}
[07:24:06.192 UTC] InitializeEvent
[07:24:06.192 UTC] To client: {"seq":0,"type":"response","request_seq":2,"command":"attach","success":true}
[07:24:06.194 UTC] [Error] Socket connection to remote was closed
[07:24:06.194 UTC] Sending TerminatedEvent as delve is closed
[07:24:06.194 UTC] To client: {"seq":0,"type":"event","event":"terminated"}
[07:24:06.201 UTC] From client: configurationDone(undefined)
[07:24:06.201 UTC] ConfigurationDoneRequest
[07:24:06.225 UTC] From client: disconnect({"restart":false})
[07:24:06.225 UTC] DisconnectRequest
I tried changing the remote path in the launch.json multiple times, trying to match the paths in the Docker files.
Maybe I have to change how Delve is set up in Docker, but to be honest I don't really know how, and I can't find good documentation on how to do this.
I was having the same problem.
I solved it by adding the line "debugAdapter": "dlv-dap" to my launch.json:
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Delve into Docker",
      "type": "go",
      "request": "attach",
      "debugAdapter": "dlv-dap",
      "mode": "remote",
      "substitutePath": [
        {
          "from": "${workspaceFolder}/",
          "to": "/app",
        },
      ],
      "port": 2345,
      "host": "127.0.0.1",
      "showLog": false,
      "apiVersion": 2,
      "trace": "verbose"
    }
  ]
}
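Note that for a remote attach to succeed, Delve also has to be running and listening inside the container. A rough sketch of how the Nakama binary could be started under a headless Delve server on port 2345 (paths and flags here are assumptions based on the Dockerfile and compose file above, not a verified setup):

dlv exec /nakama/nakama --headless --listen=:2345 --api-version=2 --accept-multiclient -- \
  --name nakama1 --database.address postgres:localdb@postgres:5432/nakama

The substitutePath entry then maps your local workspace onto the path the plugin was built from inside the image.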

Running ELK on docker, Kibana says: Unable to retrieve version information from Elasticsearch nodes

I was referring to the example given in the Elasticsearch documentation for starting the Elastic Stack (Elasticsearch and Kibana) on Docker using Docker Compose. It gives an example of a docker compose version 2.2 file, so I tried to convert it to a docker compose version 3.8 file. It also creates three Elasticsearch nodes and has security enabled. I wanted to keep it minimal to start with, so I tried to turn off security and also reduce the number of Elasticsearch nodes to two. This is how my current compose file looks:
version: "3.8"
services:
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:8.0.0-amd64
volumes:
- esdata01:/usr/share/elasticsearch/data
ports:
- 9200:9200
environment:
- node.name=es01
- cluster.name=docker-cluster
- cluster.initial_master_nodes=es01
- bootstrap.memory_lock=true
- xpack.security.enabled=false
deploy:
resources:
limits:
memory: 1g
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
# [
# "CMD-SHELL",
# # "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
# ]
# Changed to:
test: ["CMD-SHELL", "curl -f http://localhost:9200 || exit 1"]
interval: 10s
timeout: 10s
retries: 120
kibana:
depends_on:
- es01
image: docker.elastic.co/kibana/kibana:8.0.0-amd64
volumes:
- kibanadata:/usr/share/kibana/data
ports:
- 5601:5601
environment:
- SERVERNAME=kibana
- ELASTICSEARCH_HOSTS=https://localhost:9200
deploy:
resources:
limits:
memory: 1g
healthcheck:
test:
[
"CMD-SHELL",
"curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
retries: 120
volumes:
esdata01:
driver: local
kibanadata:
driver: local
Then, I tried to run it:
docker stack deploy -c docker-compose.nosec.noenv.yml elk
Creating network elk_default
Creating service elk_es01
Creating service elk_kibana
When I tried to check their status, it displayed the following:
$ docker container list
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3dcd08134e38 docker.elastic.co/kibana/kibana:8.0.0-amd64 "/bin/tini -- /usr/l…" 3 minutes ago Up 3 minutes (health: starting) 5601/tcp elk_kibana.1.ng8aspz9krfnejfpsnqzl2sci
7b548a43c45c docker.elastic.co/elasticsearch/elasticsearch:8.0.0-amd64 "/bin/tini -- /usr/l…" 3 minutes ago Up 3 minutes (healthy) 9200/tcp, 9300/tcp elk_es01.1.d9a107j6wkz42shti3n6kpfmx
I noticed that Kibana's status gets stuck at (health: starting). When I checked Kibana's logs with the command docker service logs -f elk_kibana, they had the following WARN and ERROR lines:
[WARN ][plugins.security.config] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
[WARN ][plugins.security.config] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
[WARN ][plugins.reporting.config] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[WARN ][plugins.reporting.config] Found 'server.host: "0.0.0.0"' in Kibana configuration. Reporting is not able to use this as the Kibana server hostname. To enable PNG/PDF Reporting to work, 'xpack.reporting.kibanaServer.hostname: localhost' is automatically set in the configuration. You can prevent this message by adding 'xpack.reporting.kibanaServer.hostname: localhost' in kibana.yml.
[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED 127.0.0.1:9200
It seems that Kibana is not able to connect to Elasticsearch, but why? Is it because security is disabled, and we cannot have security disabled?
PS-1: Earlier, when I set the Elasticsearch host as follows in Kibana's environment in the docker compose file:
ELASTICSEARCH_HOSTS=https://es01:9200 # that is 'es01' instead of `localhost`
it gave me following error:
[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. getaddrinfo ENOTFOUND es01
So, after checking this question, I changed es01 to localhost as specified earlier (that is, in the complete docker compose file content before PS-1).
PS-2: Replacing localhost with 192.168.0.104 gives the following errors:
[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED 192.168.0.104:9200
[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. write EPROTO 140274197346240:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:
Try this:
ELASTICSEARCH_HOSTS=http://es01:9200
I don't know why it runs on my PC, since Elasticsearch is supposed to use SSL, but in your case using http should work just fine.
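In other words, point Kibana at the es01 service name over plain HTTP, since security is disabled in the compose file above; roughly:

kibana:
  environment:
    - SERVERNAME=kibana
    - ELASTICSEARCH_HOSTS=http://es01:9200   # service name, resolvable on the stack's overlay network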

How to connect Winlogbeat to a dockerized Elasticsearch cluster using SSL?

For the past week I have been trying to connect Winlogbeat (which is on my host machine) to an Elasticsearch cluster that I set up on an Ubuntu VM using Docker, following this tutorial. (The tutorial doesn't explain how to connect a Beat.)
My problem is with the SSL configuration of Winlogbeat; I just can't get it right for some reason.
This is the error I get on the Windows machine after running the setup command (.\winlogbeat.exe setup -e):
2021-02-22T01:42:13.286+0200 ERROR instance/beat.go:971 Exiting: couldn't connect to any of
the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at
https://192.168.216.129:9200: Get "https://192.168.216.129:9200": x509: certificate signed by unknown
authority]
Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to
Elasticsearch at https://192.168.216.129:9200: Get "https://192.168.216.129:9200": x509: certificate
signed by unknown authority]
And on the Elasticsearch node I get this error -
es01 | "at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-
common-4.1.49.Final.jar:4.1.49.Final]",
es01 | "at java.lang.Thread.run(Thread.java:832) [?:?]",
es01 | "Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate",
I tried different methods without any success:
Using openssl to generate a self-signed certificate for Winlogbeat (using this tutorial). After that I tried to add the new CA I generated to the es_certs volume and to modify the elastic-docker-tls.yml so it would accept the new CA (I failed at that).
I changed the instances.yml file by adding a winlogbeat section:
  - name: winlogbeat
    dns:
      - <My Computer Name>
    ip:
      - 192.168.1.136
and ran docker-compose -f create-certs.yml run --rm create_certs on a fresh install of the stack, which resulted in the creation of a winlogbeat.crt and winlogbeat.key, but it still didn't work.
I also tried to play with verification_mode, changing it to "none", but that didn't work either.
I don't know what else to try, and I have failed to find a good source that details the SSL configuration from Beats to the ELK stack in a Docker environment.
This is the elastic-docker-tls.yml file:
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:${VERSION}
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.license.self_generated.type=trial
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=$CERTS_DIR/es01/es01.key
      - xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.http.ssl.certificate=$CERTS_DIR/es01/es01.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.transport.ssl.certificate=$CERTS_DIR/es01/es01.crt
      - xpack.security.transport.ssl.key=$CERTS_DIR/es01/es01.key
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
      - certs:$CERTS_DIR
    ports:
      - 9200:9200
    networks:
      - elastic
    healthcheck:
      test: curl --cacert $CERTS_DIR/ca/ca.crt -s https://localhost:9200 >/dev/null; if [[ $$? == 52 ]]; then echo 0; else echo 1; fi
      interval: 30s
      timeout: 10s
      retries: 5
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:${VERSION}
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.license.self_generated.type=trial
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=$CERTS_DIR/es02/es02.key
      - xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.http.ssl.certificate=$CERTS_DIR/es02/es02.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.transport.ssl.certificate=$CERTS_DIR/es02/es02.crt
      - xpack.security.transport.ssl.key=$CERTS_DIR/es02/es02.key
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
      - certs:$CERTS_DIR
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:${VERSION}
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.license.self_generated.type=trial
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=$CERTS_DIR/es03/es03.key
      - xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.http.ssl.certificate=$CERTS_DIR/es03/es03.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      - xpack.security.transport.ssl.certificate=$CERTS_DIR/es03/es03.crt
      - xpack.security.transport.ssl.key=$CERTS_DIR/es03/es03.key
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
      - certs:$CERTS_DIR
    networks:
      - elastic
  kib01:
    image: docker.elastic.co/kibana/kibana:${VERSION}
    container_name: kib01
    depends_on: {"es01": {"condition": "service_healthy"}}
    ports:
      - 5601:5601
    environment:
      SERVERNAME: localhost
      ELASTICSEARCH_URL: https://es01:9200
      ELASTICSEARCH_HOSTS: https://es01:9200
      ELASTICSEARCH_USERNAME: kibana_system
      ELASTICSEARCH_PASSWORD: CHANGEME
      ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES: $CERTS_DIR/ca/ca.crt
      SERVER_SSL_ENABLED: "true"
      SERVER_SSL_KEY: $CERTS_DIR/kib01/kib01.key
      SERVER_SSL_CERTIFICATE: $CERTS_DIR/kib01/kib01.crt
    volumes:
      - certs:$CERTS_DIR
    networks:
      - elastic
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
  certs:
    driver: local
networks:
  elastic:
    driver: bridge
This is the Winlogbeat configuration
###################### Winlogbeat Configuration Example ########################
winlogbeat.event_logs:
  - name: Application
    ignore_older: 72h

  - name: System

  - name: Security
    processors:
      - script:
          lang: javascript
          id: security
          file: ${path.home}/module/security/config/winlogbeat-security.js

  - name: Microsoft-Windows-Sysmon/Operational
    processors:
      - script:
          lang: javascript
          id: sysmon
          file: ${path.home}/module/sysmon/config/winlogbeat-sysmon.js

  - name: Windows PowerShell
    event_id: 400, 403, 600, 800
    processors:
      - script:
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js

  - name: Microsoft-Windows-PowerShell/Operational
    event_id: 4103, 4104, 4105, 4106
    processors:
      - script:
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js

  - name: ForwardedEvents
    tags: [forwarded]
    processors:
      - script:
          when.equals.winlog.channel: Security
          lang: javascript
          id: security
          file: ${path.home}/module/security/config/winlogbeat-security.js
      - script:
          when.equals.winlog.channel: Microsoft-Windows-Sysmon/Operational
          lang: javascript
          id: sysmon
          file: ${path.home}/module/sysmon/config/winlogbeat-sysmon.js
      - script:
          when.equals.winlog.channel: Windows PowerShell
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js
      - script:
          when.equals.winlog.channel: Microsoft-Windows-PowerShell/Operational
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js
# ====================== Elasticsearch template settings =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
# ================================== General ===================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "https://192.168.216.129:5601"
  ssl.enabled: true
  ssl.certificate_authorities: ["C:\\Program Files\\Winlogbeat\\ca.crt"]
  ssl.certificate: "C:\\Program Files\\Winlogbeat\\winlogbeat.crt"
  ssl.key: "C:\\Program Files\\Winlogbeat\\winlogbeat.key"
  # ssl.verification_mode: none
  username: "elastic"
  password: "XXXXXXXXXXXXXXXXX"
# Kibana Space ID
# ID of the Kibana Space into which the dashboards should be loaded. By default,
# the Default Space will be used.
#space.id:
# =============================== Elastic Cloud ================================
# These settings simplify using Winlogbeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["https://192.168.216.129:9200"]
  # Protocol - either `http` (default) or `https`.
  protocol: "https"
  ssl.certificate_authorities: ["C:\\Program Files\\Winlogbeat\\ca.crt"]
  ssl.certificate: "C:\\Program Files\\Winlogbeat\\winlogbeat.crt"
  ssl.key: "C:\\Program Files\\Winlogbeat\\winlogbeat.key"
  # ssl.verification_mode: none
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "XXXXXXXXXXXXXXX"
# ------------------------------ Logstash Output -------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["localhost:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]
# ============================= X-Pack Monitoring ==============================
# Winlogbeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Winlogbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
# ============================== Instrumentation ===============================
# Instrumentation support for the winlogbeat.
#instrumentation:
# Set to true to enable instrumentation of winlogbeat.
#enabled: false
# Environment in which winlogbeat is running on (eg: staging, production, etc.)
#environment: ""
# APM Server hosts to report instrumentation results to.
#hosts:
# - http://localhost:8200
# API Key for the APM Server(s).
# If api_key is set then secret_token will be ignored.
#api_key:
# Secret token for the APM Server(s).
#secret_token:
# ================================= Migration ==================================
# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
So it took me some time, but I've figured out what the problem with my certificate was:
I hadn't added it to the trusted root store on my Windows machine.
In the end I created a Winlogbeat crt and key using the elasticsearch-certutil tool, by adding a winlogbeat instance to the instances.yml file, and copied winlogbeat.crt, winlogbeat.key and ca.crt to my Windows machine.
Note - you can find all of them under /var/lib/docker/volumes/es_certs/_data/
On the Windows machine I configured Winlogbeat the normal way, and in the end I added the ca.crt to the trusted root store using this tutorial.
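If you prefer the command line over the certificates MMC snap-in, one way to add the CA to the trusted root store is from an elevated PowerShell prompt (the path below assumes the Winlogbeat install directory used in the config above):

Import-Certificate -FilePath "C:\Program Files\Winlogbeat\ca.crt" -CertStoreLocation Cert:\LocalMachine\Root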

How can one deploy elasticsearch on docker swarm without declaring a service for each es node?

My idea here was to set endpoint_mode to dnsrr in the hope that each Elasticsearch task in the service would pick up a random IP address when starting the discovery seeding process. This did not work. What happens is that it goes into a loop while seeding and never decides on a master. I specifically want to find an elegant configuration that deploys this on every node without static configuration files or multiple service declarations in the compose file.
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
  environment:
    - cluster.name=es-cluster
    - discovery.type=zen
    - discovery.seed_hosts=elasticsearch
    - cluster.initial_master_nodes=elasticsearch
    - bootstrap.memory_lock=false
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    - network.host=0.0.0.0
  volumes:
    - elasticsearch_data:/usr/share/elasticsearch/data
  networks:
    - elasticsearch
  deploy:
    mode: global
    endpoint_mode: dnsrr
volumes:
  ...
  elasticsearch_data:
networks:
  ...
  elasticsearch:
After deploying I just see this in the logs repeatedly (for each task):
web-services_elasticsearch.0.a6ilzu8ev6sp@dsnode3.<redacted> | {"type": "server",
"timestamp": "2020-11-10T21:47:38,443Z", "level": "WARN", "component":
"o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "es-cluster", "node.name":
"718a5740c014", "message": "master not discovered yet, this node has not previously
joined a bootstrapped (v7+) cluster, and this node must discover master-eligible
nodes [elasticsearch] to bootstrap a cluster: have discovered [{718a5740c014}
{wMUW_UyQSzSxJDaz5jYVUw} {iBXmoZqHSp24fKSEUvuUpw}{10.0.46.129}{10.0.46.129:9300}
{dilmrt} {ml.machine_memory=67428794368, xpack.installed=true, transform.node=true,
ml.max_open_jobs=20}, {99680d51349a}{BBqgO_1wRamiJZGibrhVTw}{WH6Oy5gOR1Cqs-2LUzu9Mw}
{10.0.46.123} {10.0.46.123:9300}{dilmrt}{ml.machine_memory=67424575488,
ml.max_open_jobs=20, xpack.installed=true, transform.node=true}, {27bf85fa9967}
{Fm9r8aX4Rr-xa8AF_dUaLQ} {LsL5cXghTayDtXtLeWGV3Q}{10.0.46.126}{10.0.46.126:9300}
{dilmrt} {ml.machine_memory=67424575488, ml.max_open_jobs=20, xpack.installed=true,
transform.node=true}, {fc2900a7227a}{FUztkTZQQuWqi3lyuOOfSQ}{JzACTEvzSnepc9RGXBXCzw}
{10.0.46.131} {10.0.46.131:9300}{dilmrt}{ml.machine_memory=67428794368,
ml.max_open_jobs=20, xpack.installed=true, transform.node=true}, {7b23784db269}
{Dn472Qx5RyekUMY2jeLjIA}{w- m1g1BtS6SzZkttZEidnA}{10.0.46.122}{10.0.46.122:9300}
{dilmrt}{ml.machine_memory=67424575488, ml.max_open_jobs=20, xpack.installed=true,
transform.node=true}, {7d41b0583448} {1pj_6kJZQu6n39waIzcvQQ}{xwNHOG15Q_6jwY20BbBuyQ}
{10.0.46.125}{10.0.46.125:9300}{dilmrt} {ml.machine_memory=67428794368,
ml.max_open_jobs=20, xpack.installed=true, transform.node=true}]; discovery will
continue using [10.0.46.123:9300, 10.0.46.126:9300, 10.0.46.131:9300,
10.0.46.122:9300, 10.0.46.125:9300] from hosts providers and [{718a5740c014}
{wMUW_UyQSzSxJDaz5jYVUw} {iBXmoZqHSp24fKSEUvuUpw}{10.0.46.129}{10.0.46.129:9300}
{dilmrt} {ml.machine_memory=67428794368, xpack.installed=true, transform.node=true,
ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted
version 0 in term 0" }
I feel like it's just a matter of the node.name not matching the hostname or something BUT the hostnames are being set correctly. Example:
# docker exec -it web-services_elasticsearch.e3eaakkldj7pa3ygwkqwxts4i.7gjyxyjfjscve6rw41of7fyld hostname
fc2900a7227a
Any ideas would be greatly appreciated.
One must set the hostname and node.name to "{{.Node.Hostname}}" and then list all the possible hostnames in the cluster.initial_master_nodes environment variable. "{{.Node.Hostname}}" resolves to the hostname of the Docker Swarm node.
The following works:
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
  hostname: "{{.Node.Hostname}}"
  environment:
    - node.name={{.Node.Hostname}}
    - cluster.name=ds-cluster
    - "ES_JAVA_OPTS=-Xms10g -Xmx10g"
    - discovery.seed_hosts=elasticsearch
    - cluster.initial_master_nodes=fqdn1.domainname.com,fqdn2.domainname.com,fqdn3.domainname.com,fqdn4.domainname.com,fqdn5.domainname.com,fqdn6.domainname.com
    - node.ml=false
    - xpack.ml.enabled=true
    - xpack.monitoring.enabled=true
    - xpack.monitoring.collection.enabled=true
    - xpack.security.enabled=false
    - xpack.watcher.enabled=true
    - bootstrap.memory_lock=false
  networks:
    - backend
  volumes:
    - elasticsearch_data:/usr/share/elasticsearch/data
  deploy:
    mode: global
    endpoint_mode: dnsrr
    resources:
      limits:
        memory: 20G

How to deploy elasticsearch with docker swarm?

I created 3 virtual machines using docker-machine; they are:
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
cluster - virtualbox Running tcp://192.168.99.101:2376 v18.09.5
cluster2 - virtualbox Running tcp://192.168.99.102:2376 v18.09.5
master - virtualbox Running tcp://192.168.99.100:2376 v18.09.5
and then I created a Docker swarm on the master machine:
docker-machine ssh master "docker swarm init --advertise-addr 192.168.99.100"
and had cluster and cluster2 join the master:
docker-machine ssh cluster "docker swarm join --advertise-addr 192.168.99.101 --token xxxx 192.168.99.100:2377"
docker-machine ssh cluster2 "docker swarm join --advertise-addr 192.168.99.102 --token xxxx 192.168.99.100:2377"
the docker node ls info:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
r4a6y9wie4zp3pl4wi4e6wqp8 cluster Ready Active 18.09.5
sg9gq6s3k6vty7qap7co6eppn cluster2 Ready Active 18.09.5
xb6telu8cn3bfmume1kcektkt * master Ready Active Leader 18.09.5
here is the deploy config, swarm.yml:
version: "3.3"
services:
elasticsearch:
image: elasticsearch:7.0.0
ports:
- "9200:9200"
- "9300:9300"
environment:
- cluster.name=elk
- network.host=_eth1:ipv4_
- network.bind_host=_eth1:ipv4_
- network.publish_host=_eth1:ipv4_
- discovery.seed_hosts=192.168.99.100,192.168.99.101
- cluster.initial_master_nodes=192.168.99.100,192.168.99.101
- bootstrap.memory_lock=false
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
networks:
- backend
deploy:
mode: replicated
replicas: 3
#endpoint_mode: dnsrr
restart_policy:
condition: none
resources:
limits:
cpus: "1.0"
memory: "1024M"
reservations:
memory: 20M
networks:
backend:
# driver: overlay
# attachable: true
I pulled the elasticsearch image to the virtual machines:
docker-machine ssh master "docker image pull elasticsearch:7.0.0"
docker-machine ssh cluster "docker image pull elasticsearch:7.0.0"
docker-machine ssh cluster2 "docker image pull elasticsearch:7.0.0"
before running it, I ran this command to fix some Elasticsearch bootstrap errors:
docker-machine ssh master "sudo sysctl -w vm.max_map_count=262144"
docker-machine ssh cluster "sudo sysctl -w vm.max_map_count=262144"
docker-machine ssh cluster2 "sudo sysctl -w vm.max_map_count=262144"
and then I ran docker stack deploy -c swarm.yml es, but the Elasticsearch cluster does not work.
docker-machine ssh master
docker service logs es_elasticsearch -f
shows:
es_elasticsearch.1.uh1x0s9qr7mb#cluster | {"type": "server", "timestamp": "2019-04-25T16:28:47,143+0000", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "elk", "node.name": "e8dba5562417", "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [192.168.99.100, 192.168.99.101] to bootstrap a cluster: have discovered []; discovery will continue using [192.168.99.100:9300, 192.168.99.101:9300] from hosts providers and [{e8dba5562417}{Jy3t0AAkSW-jY-IygOCjOQ}{z7MYIf5wTfOhCX1r25wNPg}{10.255.0.46}{10.255.0.46:9300}{ml.machine_memory=1037410304, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }
es_elasticsearch.2.swswlwmle9e9#cluster2 | {"type": "server", "timestamp": "2019-04-25T16:28:47,389+0000", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "elk", "node.name": "af5d88a04b42", "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [192.168.99.100, 192.168.99.101] to bootstrap a cluster: have discovered []; discovery will continue using [192.168.99.100:9300, 192.168.99.101:9300] from hosts providers and [{af5d88a04b42}{zhxMeNMAQN2evKDlsA33qA}{fpYPTvJ6STmyqrgxlMkD_w}{10.255.0.47}{10.255.0.47:9300}{ml.machine_memory=1037410304, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }
es_elasticsearch.3.x8ouukovhh80#master | {"type": "server", "timestamp": "2019-04-25T16:28:48,818+0000", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "elk", "node.name": "0e7e4d96b31a", "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [192.168.99.100, 192.168.99.101] to bootstrap a cluster: have discovered []; discovery will continue using [192.168.99.100:9300, 192.168.99.101:9300] from hosts providers and [{0e7e4d96b31a}{Xs9966RjTEWvEbuj4-ySYA}{-eV4lvavSHq6JhoW0qWu6A}{10.255.0.48}{10.255.0.48:9300}{ml.machine_memory=1037410304, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }
I guess the cluster formation failure may be due to a network configuration error. I don't know how to fix it; I have tried modifying the config many times and it keeps failing.
Try this, it is working :) docker-compose.yml:
version: "3.7"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
hostname: "{{.Node.Hostname}}"
environment:
- node.name={{.Node.Hostname}}
- cluster.name=my-cluster
- "ES_JAVA_OPTS=-Xms2g -Xmx2g"
- discovery.seed_hosts=elasticsearch
- cluster.initial_master_nodes=node1,node2,node3
- node.ml=false
- xpack.ml.enabled=false
- xpack.monitoring.enabled=false
- xpack.security.enabled=false
- xpack.watcher.enabled=false
- bootstrap.memory_lock=false
volumes:
- elasticsearch-data:/usr/share/elasticsearch/data
deploy:
mode: global
endpoint_mode: dnsrr
resources:
limits:
memory: 4G
nginx:
image: nginx:1.17.1-alpine
ports:
- 9200:9200
deploy:
mode: global
command: |
/bin/sh -c "echo '
user nobody nogroup;
worker_processes auto;
events {
worker_connections 1024;
}
http {
client_max_body_size 4g;
resolver 127.0.0.11 ipv6=off;
server {
listen *:9200;
location / {
proxy_set_header Connection keep-alive;
set $$url http://elasticsearch:9200;
proxy_pass $$url;
proxy_set_header Host $$http_host;
proxy_set_header X-Real-IP $$remote_addr;
proxy_set_header X-Forwarded-For $$proxy_add_x_forwarded_for;
}
}
}' | tee /etc/nginx/nginx.conf && nginx -t && nginx -g 'daemon off;'"
volumes:
elasticsearch-data:
Trying to manually specify all the specific IPs and bindings is tricky because of the swarm overlay network.
Instead, simply make your ES nodes discoverable and let Swarm take care of the node discovery and communication. To make them discoverable, we can use a predictable name like the Swarm node hostname.
Try changing your environment settings in the swarm.yml file as follows:
environment:
  - network.host=0.0.0.0
  - discovery.seed_hosts=elasticsearch # Service name, to let Swarm handle discovery
  - cluster.initial_master_nodes=master,cluster,cluster2 # Swarm node hostnames
  - node.name={{.Node.Hostname}} # To create a predictable node name
This of course assumes that we already know the swarm hostnames, which you listed in the docker node ls output above. Without knowing these values, we would have no way of having a predictable set of node names to look for. In that case, you could create one ES node entry with a particular node name, and then another entry which references the first entry's node name as the cluster.initial_master_nodes.
Use dnsrr mode without ports. Expose elasticsearch with nginx ;)
See my docker-compose.yml
In my experience https://github.com/shazChaudhry/docker-elastic works perfectly, and just one file from the entire repo is enough. I downloaded https://github.com/shazChaudhry/docker-elastic/blob/master/docker-compose.yml and removed the Logstash bits, which I didn't need. Then I added the following to .bashrc:
export ELASTICSEARCH_HOST=$(hostname)
export ELASTICSEARCH_PASSWORD=foobar
export ELASTICSEARCH_USERNAME=elastic
export ELASTIC_VERSION=7.4.2
export INITIAL_MASTER_NODES=$ELASTICSEARCH_HOST
And docker stack deploy --compose-file docker-compose.yml elastic works.
Ideas I gleaned from Ahmet Vehbi Olgaç's docker-compose.yml, which worked for me (a consolidated sketch follows the list):
Use deployment / mode: global. This will cause the swarm to deploy one replica to each swarm worker, for each node that is configured like this.
Use deployment / endpoint_mode: dnsrr. This will let all containers in the swarm access the nodes by the service name.
Use hostname: {{.Node.Hostname}} or a similar template-based expression. This ensures a unique name for each deployed container.
Use environment / node.name={{.Node.Hostname}}. Again, you can vary the pattern. The point is that each es node should get a unique name.
Use cluster.initial_master_nodes=*hostname1*,*hostname2*,.... Assuming you know the hostnames of your docker worker machines. Use whatever pattern you used in #3, but substitute out the whole hostname, and include all the hostnames.
If you don't know your hostnames, you can do what Andrew Cachia's answer suggests: set up one container (do not replicate it) to act solely as the master seed and give it a predictable hostname, then have all other nodes refer to that node as the master seed. However, this introduces a single point of failure.
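Putting those points together, a minimal sketch of such a service definition might look like this (hostname1, hostname2, hostname3 are placeholders for your swarm node hostnames):

elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
  hostname: "{{.Node.Hostname}}"             # unique, predictable container name (point 3)
  environment:
    - node.name={{.Node.Hostname}}           # node name matches the swarm hostname (point 4)
    - discovery.seed_hosts=elasticsearch     # the service name; docker DNS returns the peer tasks
    - cluster.initial_master_nodes=hostname1,hostname2,hostname3   # your swarm hostnames (point 5)
  deploy:
    mode: global                             # one replica per swarm node (point 1)
    endpoint_mode: dnsrr                     # tasks addressable via DNS round robin (point 2)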
Elasticsearch 8.5.0 answer.
For my needs, I didn't want to add a reverse-proxy/load balancer, but I do want to expose port 9200 on the swarm nodes where Elasticsearch replicas are running (using just swarm), so that external clients can access the Elasticsearch REST API. So I used endpoint mode dnsrr (ref) and exposed port 9200 on the hosts where the replicas run.
If you don't need to expose port 9200 (i.e., nothing will connect to the elasticsearch replicas outside of swarm), remove the ports: config from the elasticsearch service.
I also only want elasticsearch replicas to run on a subset of my swarm nodes (3 of them). I created docker node label elasticsearch on those three nodes. Then mode: global and constraint node.labels.elasticsearch==True will ensure 1 replica runs on each of those nodes.
I run kibana on one of those 3 nodes too: swarm can pick which one, since port 5601 is exposed on swarm's ingress overlay network.
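For reference, the node labels can be added from a swarm manager with docker node update; a quick sketch (node names here are placeholders matching ELASTIC_NODES below):

docker node update --label-add elasticsearch=True swarm_node1
docker node update --label-add elasticsearch=True swarm_node2
docker node update --label-add elasticsearch=True swarm_node3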
Lines you'll likely need to edit are marked with ######.
# docker network create -d overlay --attachable elastic-net
# cat elastic-stack-env
#!/bin/bash
export STACK_VERSION=8.5.0 # Elasticsearch and Kibana version
export ES_PORT=9200 # port to expose Elasticsearch HTTP API to the host
export KIBANA_PORT=5601 # port to expose Kibana to the host
read -p "Enter elastic user password: " ELASTIC_PASSWORD
read -p "Enter kibana_system user password: " KIBANA_PASSWORD
export KIBANA_URL=https://kibana.my-domain.com:$KIBANA_PORT #######
export SHARED_DIR=/some/nfs/or/shared/storage/elastic #######
export KIBANA_SSL_KEY_PATH=config/certs/kibana.key
export KIBANA_SSL_CERT_PATH=config/certs/kibana.crt
export ELASTIC_NODES=swarm_node1,swarm_node2,swarm_node3 #######
# ELASTIC_NODES must match what docker reports from {{.Node.Hostname}}
export KIBANA_SSL_CERT_AUTH_PATH=config/certs/My_Root_CA.crt #######
export CLUSTER_NAME=docker-cluster
export MEM_LIMIT=4294967296 # 4 GB; increase or decrease based on the available host memory (in bytes)
# cat elastic-stack.yml
version: "3.8"
services:
elasticsearch:
image: localhost:5000/elasticsearch:${STACK_VERSION:?} ####### I have a local registry
deploy:
endpoint_mode: dnsrr
mode: global # but note constraints below
placement:
constraints:
- node.labels.elasticsearch==True
resources:
limits:
memory:
${MEM_LIMIT}
dns: 127.0.0.11 # use docker DNS only (may not be required)
networks:
- elastic-net
volumes:
- ${SHARED_DIR:?}/certs:/usr/share/elasticsearch/config/certs
- /path/to/some/local/storage/elasticsearch:/usr/share/elasticsearch/data
ports: ##### remove if nothing outside of swarm needs to access port 9200
- target: 9200
published: ${ES_PORT} # we publish this port so that external clients can access the ES REST API
protocol: tcp
mode: host # required when using dnsrr
environment: # https://www.elastic.co/guide/en/elasticsearch/reference/master/settings.html
# https://www.elastic.co/guide/en/elasticsearch/reference/master/docker.html#docker-configuration-methods
- node.name={{.Node.Hostname}} # see Andrew Cachia's answer
- cluster.name=${CLUSTER_NAME}
- discovery.seed_hosts=elasticsearch # use service name here, since (docker's) DNS is used:
# https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html#unicast.hosts
- cluster.initial_master_nodes=${ELASTIC_NODES} # use node.names here
# https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html#initial_master_nodes
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/elasticsearch/elasticsearch.key
- xpack.security.http.ssl.certificate=certs/elasticsearch/elasticsearch.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.http.ssl.verification_mode=certificate
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/elasticsearch/elasticsearch.key
- xpack.security.transport.ssl.certificate=certs/elasticsearch/elasticsearch.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=basic
healthcheck:
test:
[ "CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
logging: # we use rsyslog
driver: syslog
options:
syslog-facility: "local2"
kibana:
# this service depends on the setup service (defined below), but docker stack has no
# way to specify dependencies, but more importantly, there's been a move away from this:
# https://stackoverflow.com/a/47714157/215945
image: localhost:5000/kibana:${STACK_VERSION:?} ######
hostname: kibana
deploy:
placement:
constraints:
- node.labels.elasticsearch==True # run KB on any one of the ES nodes
resources:
limits:
memory:
${MEM_LIMIT}
dns: 127.0.0.11 # use docker DNS only (may not be required)
networks:
- elastic-net
volumes:
- ${SHARED_DIR:?}/kibana:/usr/share/kibana/data
- ${SHARED_DIR:?}/certs:/usr/share/kibana/config/certs
ports:
- ${KIBANA_PORT}:5601
environment: # https://www.elastic.co/guide/en/kibana/master/settings.html
# https://www.elastic.co/guide/en/kibana/master/docker.html#environment-variable-config
# CAPS_WITH_UNDERSCORES must be used with Kibana
- SERVER_NAME=kibana
- ELASTICSEARCH_HOSTS=["https://elasticsearch:9200"]
- ELASTICSEARCH_USERNAME=kibana_system
- ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
- ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
- SERVER_PUBLICBASEURL=${KIBANA_URL}
# if you don't want to use https/TLS with Kibana, comment-out
# the next four lines
- SERVER_SSL_ENABLED=true
- SERVER_SSL_KEY=${KIBANA_SSL_KEY_PATH}
- SERVER_SSL_CERTIFICATE=${KIBANA_SSL_CERT_PATH}
- SERVER_SSL_CERTIFICATEAUTHORITIES=${KIBANA_SSL_CERT_AUTH_PATH}
- TELEMETRY_OPTIN=false
healthcheck:
test:
[
"CMD-SHELL",
"curl -sIk https://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
retries: 120
logging:
driver: syslog
options:
syslog-facility: "local2"
setup:
image: localhost:5000/elasticsearch:${STACK_VERSION:?} #######
deploy:
placement:
constraints:
- node.labels.elasticsearch==True
restart_policy: # https://docs.docker.com/compose/compose-file/compose-file-v3/#restart_policy
condition: none
volumes:
- ${SHARED_DIR:?}/certs:/usr/share/elasticsearch/config/certs
dns: 127.0.0.11 # use docker DNS only (may not be required)
networks:
- elastic-net
command: >
bash -c '
until curl -s --cacert config/certs/ca/ca.crt https://elasticsearch:9200 | grep -q "missing authentication credentials"
do
echo "waiting 30 secs for Elasticsearch availability..."
sleep 30
done
echo "setting kibana_system password"
until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://elasticsearch:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"
do
echo "waiting 10 secs before trying to set password again..."
sleep 10
done
echo "done"
'
logging:
driver: syslog
options:
syslog-facility: "local2"
networks:
elastic-net:
external: true
Deploy:
# . ./elastic-stack-env
# docker stack deploy -c elastic-stack.yml elastic
# # ... after Kibana comes up, you can remove the setup service if you want:
# docker service rm elastic_setup
Here's how I created the Elasticsearch CA and cert:
# cat elastic-certs.yml
version: "3.8"
services:
setup:
image: localhost:5000/elasticsearch:${STACK_VERSION:?} #######
volumes:
- ${SHARED_DIR:?}/certs:/usr/share/elasticsearch/config/certs
user: "0:0"
command: >
bash -c '
if [ ! -f certs/ca.zip ]; then
echo "Creating CA";
bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
unzip config/certs/ca.zip -d config/certs;
fi;
if [ ! -f certs/certs.zip ]; then
echo "Creating certs";
echo -ne \
"instances:\n"\
" - name: elasticsearch\n"\
" dns:\n"\
" - elasticsearch\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
> config/certs/instances.yml;
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
echo "Setting file permissions"
chown -R root:root config/certs;
find . -type d -exec chmod 750 \{\} \;;
find . -type f -exec chmod 640 \{\} \;;
fi;
sleep infinity
'
healthcheck:
test: ["CMD-SHELL", "[ -f config/certs/elasticsearch/elasticsearch.crt ]"]
interval: 1s
timeout: 5s
retries: 120
# . ./elastic-stack-env
# docker stack deploy -c elastic-certs.yml elastic-certs
# # ... ensure files are created under $SHARED_DIR/certs, then
# docker stack rm elastic-certs
How I created the Kibana cert is outside the scope of this question.
I run a Fluent Bit swarm service (mode: global, docker network elastic-net) to send logs to the elasticsearch service. Although outside the scope of this question, here's the salient config:
[OUTPUT]
    name                 es
    match                <whatever is appropriate for you here>
    host                 elasticsearch
    port                 9200
    index                my-index-default
    http_user            fluentbit
    http_passwd          ${FLUENTBIT_PASSWORD}
    tls                  on
    tls.ca_file          /certs/ca/ca.crt
    tls.crt_file         /certs/elasticsearch/elasticsearch.crt
    tls.key_file         /certs/elasticsearch/elasticsearch.key
    retry_limit          false
    suppress_type_name   on
    # trace_output       on
Host elasticsearch will be resolved by docker's DNS server to the three IP addresses of the elasticsearch replicas, so there is no single point of failure.
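To check that the DNS round-robin resolution is working, one option is to attach a throwaway container to the same (attachable) overlay network and look up the service name; a quick sketch using busybox:

docker run --rm --network elastic-net busybox nslookup elasticsearch

With endpoint_mode: dnsrr this should list one address per running replica rather than a single virtual IP.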
