I am trying to run a local mosquitto broker, publisher and subscriber setup via Docker and docker-compose, but the publisher cannot connect to the broker. However, connecting to the local broker via the CLI works fine.
I get the following error when running the setup below.
{ Error: connect ECONNREFUSED 127.0.0.1:1883
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1088:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 1883 }
Local dockerized setup:
docker-compose.yml:
version: "3.5"
services:
publisher:
hostname: publisher
container_name: publisher
build:
context: ./
dockerfile: dev.Dockerfile
command: npm start
networks:
- default
depends_on:
- broker
broker:
image: eclipse-mosquitto
hostname: mosquitto-broker
container_name: mosquitto-broker
networks:
- default
ports:
- "1883:1883"
networks:
default:
dev.Dockerfile:
FROM node:11-alpine
RUN mkdir app
WORKDIR app
COPY package*.json ./
RUN npm ci
COPY ./src ./src
CMD npm start
src/index.js:
const mqtt = require("mqtt");
const client = mqtt.connect("mqtt://localhost:1883");
client.on("connect", () => {
console.log("Start publishing...");
client.publish("testTopic", "test");
});
client.on("error", (error) => {
console.error(error);
});
However, if I connect to the mosquitto broker via the MQTT.js CLI, it works as expected. E.g.
mqtt sub -t 'testTopic' -h 'localhost' and mqtt pub -t 'testTopic' -h 'localhost' -m 'from MQTT.js'.
What am I missing?
Your publisher and broker are running in two different containers, which means they are effectively two different machines, and each machine has its own IP.
You can't reach the broker service from your publisher container by using localhost:1883, and vice versa, from the broker to the publisher container.
To reach the broker container, you have to use its container IP, container name, or service name.
In your case, change mqtt.connect("mqtt://localhost:1883"); to mqtt.connect("mqtt://broker:1883"); and give it a try.
The publisher and broker run in different containers, meaning they have different IPs.
When the publisher tries to reach the broker at localhost:1883, an ECONNREFUSED is expected, since the broker is not in the same container.
You should replace 127.0.0.1 or localhost with the service name of the broker (broker in this case). The service name will be resolved to the correct IP of the broker container.
In your index.js you should change "localhost" to "broker". Inside a container, "localhost" resolves to that specific container, so you should always use the service name instead; Docker will take care of the routing to that specific service. Also, by default all services in the same Compose file are added to the same network, so there is no need to specify it.
So basically change this: const client = mqtt.connect("mqtt://localhost:1883");
To this: const client = mqtt.connect("mqtt://broker:1883");
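If you'd rather not hard-code the broker address, a small variation is to pass it in from the Compose file and keep a localhost fallback for running outside Docker. A minimal sketch, assuming index.js reads a hypothetical MQTT_BROKER_URL variable via process.env.MQTT_BROKER_URL:
  publisher:
    build:
      context: ./
      dockerfile: dev.Dockerfile
    environment:
      # MQTT_BROKER_URL is a hypothetical variable name; index.js would read it
      # with process.env.MQTT_BROKER_URL and fall back to mqtt://localhost:1883
      - MQTT_BROKER_URL=mqtt://broker:1883
    depends_on:
      - broker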
I currently have an app with 3 different databases (it's for a test). I have the following Docker image:
Dockerfile
FROM golang:1.15
WORKDIR /myapp
# Download the wait-for-it tool.
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh /wait-for-it
RUN chmod +x /wait-for-it
and the following docker-compose.yml
version: '3.7'
services:
app:
build:
context: .
dockerfile: Dockerfile.dev
volumes:
- .:/app
command: sh -c "/wait-for-it postgres:10001 -- /wait-for-it oracle:10000 -- /wait-for-it mongodb:10002"
depends_on:
- oracle
- mongodb
- postgres
ports:
- "8080:8080"
oracle:
image: chameleon82/oracle-xe-10g:latest
ports:
- "10000:8080"
expose:
- 10000
postgres:
image: postgres:9.6-alpine
ports:
- "10001:5432"
environment:
- POSTGRES_HOST_AUTH_METHOD=trust
- POSTGRES_DB=testdb
expose:
- 10001
mongodb:
image: mongo:latest
ports:
- "10002:27017"
expose:
- 10002
The thing is, as you saw, my app listens on 8080, but so does Oracle. I know I can change my app's port, but still, I would like to move Oracle to another port. I am trying to achieve this with port mapping, but I get the feeling that it only applies to the host machine, not to connections made from within the Compose network. Am I wrong?
Think of services within a Docker Compose network (e.g. app, oracle) as distinct hosts. Each is addressable by its service name, i.e. app should refer to the oracle service by that name (oracle).
The port mapping allows you to expose (and map) service ports within a Docker Compose network to (possibly different) host ports. This is commonly needed because the host has a single port space (0...65535), whereas each service within the Docker Compose network has its own port space. Two services (e.g. http1 and http2) may each use port 8080, but there's only one 8080 on the host, so to expose both of these services on your host, one would have to yield; one could be on the host's 8080, but the other would need to be elsewhere, perhaps 8081.
In your case, oracle runs on 8080 within the Docker Compose network and is exposed on the host's port 10000. As far as the app service is concerned, this service is available as oracle:8080 (8080, not 10000) within the Docker Compose network.
The expose syntax is purely documentary and has no functional effect.
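To make the port-space point concrete, here is a minimal sketch with two hypothetical services: both listen on container port 80, and only the host-side ports must differ:
services:
  http1:
    image: nginx:alpine    # listens on 80 inside its container
    ports:
      - "8080:80"          # host 8080 -> http1's 80
  http2:
    image: nginx:alpine    # also listens on 80; no conflict inside the Compose network
    ports:
      - "8081:80"          # host 8081 -> http2's 80
From a third service on the same network, these are still reachable as http1:80 and http2:80; the 8080/8081 mapping only matters on the host side.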
Responding to comments
If I run your Compose script as-is, it does not work. This is expected, because e.g. postgres is available on 5432 within the Compose network, not on 10001:
docker-compose logs app
Attaching to 63690852_app_1
app_1 | wait-for-it: waiting 15 seconds for postgres:10001
app_1 | wait-for-it: timeout occurred after waiting 15 seconds for postgres:10001
app_1 | wait-for-it: waiting 15 seconds for oracle:10000
app_1 | wait-for-it: timeout occurred after waiting 15 seconds for oracle:10000
app_1 | wait-for-it: waiting 15 seconds for mongodb:10002
app_1 | wait-for-it: timeout occurred after waiting 15 seconds for mongodb:10002
If I correct the ports:
command: sh -c "/wait-for-it postgres:5432 -- /wait-for-it oracle:8080 -- /wait-for-it mongodb:27017"
It works as expected:
docker-compose logs app
Attaching to 63690852_app_1
app_1 | wait-for-it: waiting 15 seconds for postgres:5432
app_1 | wait-for-it: postgres:5432 is available after 0 seconds
app_1 | wait-for-it: waiting 15 seconds for oracle:8080
app_1 | wait-for-it: oracle:8080 is available after 8 seconds
app_1 | wait-for-it: waiting 15 seconds for mongodb:27017
app_1 | wait-for-it: mongodb:27017 is available after 0 seconds
I am trying to connect to Elasticsearch on GitLab CI to run my tests, but I always get connection refused.
Here is my script:
services:
- name: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
command: [ "bin/elasticsearch", "-Expack.security.enabled=false", "-Ediscovery.type=single-node" ]
script:
- curl "http://$(cat /etc/hosts |grep $HOSTNAME|cut -d$'\t' -f1):9200"
With $(cat /etc/hosts |grep $HOSTNAME|cut -d$'\t' -f1) I can get the IP address.
I tried using alias: elasticsearch, but when I curl "http://elasticsearch:9200" I get the error Could not resolve host.
I created 3 virtual machines using docker-machine; they are:
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
cluster - virtualbox Running tcp://192.168.99.101:2376 v18.09.5
cluster2 - virtualbox Running tcp://192.168.99.102:2376 v18.09.5
master - virtualbox Running tcp://192.168.99.100:2376 v18.09.5
Then I created a Docker swarm on the master machine:
docker-machine ssh master "docker swarm init --advertise-addr 192.168.99.100"
and joined cluster and cluster2 to the master:
docker-machine ssh cluster "docker swarm join --advertise-addr 192.168.99.101 --token xxxx 192.168.99.100:2377"
docker-machine ssh cluster2 "docker swarm join --advertise-addr 192.168.99.102 --token xxxx 192.168.99.100:2377"
The docker node ls output:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
r4a6y9wie4zp3pl4wi4e6wqp8 cluster Ready Active 18.09.5
sg9gq6s3k6vty7qap7co6eppn cluster2 Ready Active 18.09.5
xb6telu8cn3bfmume1kcektkt * master Ready Active Leader 18.09.5
Here is the deploy config, swarm.yml:
version: "3.3"
services:
elasticsearch:
image: elasticsearch:7.0.0
ports:
- "9200:9200"
- "9300:9300"
environment:
- cluster.name=elk
- network.host=_eth1:ipv4_
- network.bind_host=_eth1:ipv4_
- network.publish_host=_eth1:ipv4_
- discovery.seed_hosts=192.168.99.100,192.168.99.101
- cluster.initial_master_nodes=192.168.99.100,192.168.99.101
- bootstrap.memory_lock=false
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
networks:
- backend
deploy:
mode: replicated
replicas: 3
#endpoint_mode: dnsrr
restart_policy:
condition: none
resources:
limits:
cpus: "1.0"
memory: "1024M"
reservations:
memory: 20M
networks:
backend:
# driver: overlay
# attachable: true
I pulled the Elasticsearch image on each virtual machine:
docker-machine ssh master "docker image pull elasticsearch:7.0.0"
docker-machine ssh cluster "docker image pull elasticsearch:7.0.0"
docker-machine ssh cluster2 "docker image pull elasticsearch:7.0.0"
Before running, I ran this command on each machine to fix some Elasticsearch bootstrap errors:
docker-machine ssh master "sudo sysctl -w vm.max_map_count=262144"
docker-machine ssh cluster "sudo sysctl -w vm.max_map_count=262144"
docker-machine ssh cluster2 "sudo sysctl -w vm.max_map_count=262144"
Then I ran docker stack deploy -c swarm.yml es, but the Elasticsearch cluster does not work.
docker-machine ssh master
docker service logs es_elasticsearch -f
shows:
es_elasticsearch.1.uh1x0s9qr7mb#cluster | {"type": "server", "timestamp": "2019-04-25T16:28:47,143+0000", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "elk", "node.name": "e8dba5562417", "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [192.168.99.100, 192.168.99.101] to bootstrap a cluster: have discovered []; discovery will continue using [192.168.99.100:9300, 192.168.99.101:9300] from hosts providers and [{e8dba5562417}{Jy3t0AAkSW-jY-IygOCjOQ}{z7MYIf5wTfOhCX1r25wNPg}{10.255.0.46}{10.255.0.46:9300}{ml.machine_memory=1037410304, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }
es_elasticsearch.2.swswlwmle9e9#cluster2 | {"type": "server", "timestamp": "2019-04-25T16:28:47,389+0000", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "elk", "node.name": "af5d88a04b42", "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [192.168.99.100, 192.168.99.101] to bootstrap a cluster: have discovered []; discovery will continue using [192.168.99.100:9300, 192.168.99.101:9300] from hosts providers and [{af5d88a04b42}{zhxMeNMAQN2evKDlsA33qA}{fpYPTvJ6STmyqrgxlMkD_w}{10.255.0.47}{10.255.0.47:9300}{ml.machine_memory=1037410304, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }
es_elasticsearch.3.x8ouukovhh80#master | {"type": "server", "timestamp": "2019-04-25T16:28:48,818+0000", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "elk", "node.name": "0e7e4d96b31a", "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [192.168.99.100, 192.168.99.101] to bootstrap a cluster: have discovered []; discovery will continue using [192.168.99.100:9300, 192.168.99.101:9300] from hosts providers and [{0e7e4d96b31a}{Xs9966RjTEWvEbuj4-ySYA}{-eV4lvavSHq6JhoW0qWu6A}{10.255.0.48}{10.255.0.48:9300}{ml.machine_memory=1037410304, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }
I guess the cluster formation failure may be due to a network configuration error, but I don't know how to fix it: I have tried modifying the config many times, and it fails again and again.
Try this; it is working :) docker-compose.yml:
version: "3.7"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
hostname: "{{.Node.Hostname}}"
environment:
- node.name={{.Node.Hostname}}
- cluster.name=my-cluster
- "ES_JAVA_OPTS=-Xms2g -Xmx2g"
- discovery.seed_hosts=elasticsearch
- cluster.initial_master_nodes=node1,node2,node3
- node.ml=false
- xpack.ml.enabled=false
- xpack.monitoring.enabled=false
- xpack.security.enabled=false
- xpack.watcher.enabled=false
- bootstrap.memory_lock=false
volumes:
- elasticsearch-data:/usr/share/elasticsearch/data
deploy:
mode: global
endpoint_mode: dnsrr
resources:
limits:
memory: 4G
nginx:
image: nginx:1.17.1-alpine
ports:
- 9200:9200
deploy:
mode: global
command: |
/bin/sh -c "echo '
user nobody nogroup;
worker_processes auto;
events {
worker_connections 1024;
}
http {
client_max_body_size 4g;
resolver 127.0.0.11 ipv6=off;
server {
listen *:9200;
location / {
proxy_set_header Connection keep-alive;
set $$url http://elasticsearch:9200;
proxy_pass $$url;
proxy_set_header Host $$http_host;
proxy_set_header X-Real-IP $$remote_addr;
proxy_set_header X-Forwarded-For $$proxy_add_x_forwarded_for;
}
}
}' | tee /etc/nginx/nginx.conf && nginx -t && nginx -g 'daemon off;'"
volumes:
elasticsearch-data:
Trying to manually specify all the specific IPs and bindings is tricky because of the swarm overlay network.
Instead, simply make your ES nodes discoverable and let Swarm take care of the node discovery and communication. To make them discoverable, we can use a predictable name like the Swarm node hostname.
Try changing your environment settings in the swarm.yml file as follows:
environment:
- network.host=0.0.0.0
- discovery.seed_hosts=elasticsearch #Service name, to let Swarm handle discovery
- cluster.initial_master_nodes=master,cluster,cluster2 #Swarm nodes host names
- node.name={{.Node.Hostname}} #To create a predictable node name
This of course assumes that we already know the swarm hostnames, which you listed in the docker node ls output above. Without knowing these values, we would have no way of having a predictable set of node names to look for. In that case, you could create one ES node entry with a particular node name, and then another entry which references the first entry's node name as its cluster.initial_master_nodes.
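A minimal sketch of that seed-node fallback, using hypothetical service names es-seed and es-node (image version taken from the question; cluster.name, memory and network settings omitted for brevity):
services:
  es-seed:
    image: elasticsearch:7.0.0
    environment:
      - node.name=es-seed                      # fixed, predictable node name
      - cluster.initial_master_nodes=es-seed   # bootstrap the cluster from this one node
    deploy:
      replicas: 1                              # do not replicate the seed
  es-node:
    image: elasticsearch:7.0.0
    environment:
      - discovery.seed_hosts=es-seed           # other nodes find the seed by service name
    deploy:
      replicas: 2
Note that the seed node is then a single point of failure during bootstrap.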
Use dnsrr mode without ports. Expose elasticsearch with nginx ;)
See my docker-compose.yml
In my experience https://github.com/shazChaudhry/docker-elastic works perfectly, and just one file from the entire repo is enough. I downloaded https://github.com/shazChaudhry/docker-elastic/blob/master/docker-compose.yml and removed the logstash bits; I didn't need those. Then I added the following to .bashrc:
export ELASTICSEARCH_HOST=$(hostname)
export ELASTICSEARCH_PASSWORD=foobar
export ELASTICSEARCH_USERNAME=elastic
export ELASTIC_VERSION=7.4.2
export INITIAL_MASTER_NODES=$ELASTICSEARCH_HOST
And docker stack deploy --compose-file docker-compose.yml elastic works.
Ideas I gleaned from Ahmet Vehbi Olgaç's docker-compose.yml, which worked for me (a consolidated sketch follows this list):
1. Use deploy / mode: global. This will cause the swarm to deploy one replica to each swarm worker, for each node that is configured like this.
2. Use deploy / endpoint_mode: dnsrr. This will let all containers in the swarm access the nodes by the service name.
3. Use hostname: {{.Node.Hostname}} or a similar template-based expression. This ensures a unique, predictable name for each deployed container.
4. Use environment / node.name={{.Node.Hostname}}. Again, you can vary the pattern. The point is that each ES node should get a unique name.
5. Use cluster.initial_master_nodes=hostname1,hostname2,.... Assuming you know the hostnames of your Docker worker machines, use whatever pattern you used in item 3, but substitute in the whole hostname, and include all the hostnames.
If you don't know your hostnames, you can do what Andrew Cachia's answer suggests: set up one container (do not replicate it) to act solely as the master seed and give it a predictable hostname, then have all other nodes refer to that node as the master seed. However, this introduces a single point of failure.
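Putting these points together, a minimal sketch (assuming ES 7.x and three swarm nodes whose hostnames are node1, node2 and node3; memory and volume settings omitted):
version: "3.7"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
    hostname: "{{.Node.Hostname}}"          # item 3: unique, predictable container name
    environment:
      - node.name={{.Node.Hostname}}        # item 4: unique ES node name per swarm node
      - discovery.seed_hosts=elasticsearch  # the service name; Docker DNS returns all replicas
      - cluster.initial_master_nodes=node1,node2,node3  # item 5: your swarm node hostnames
    deploy:
      mode: global                          # item 1: one replica per swarm node
      endpoint_mode: dnsrr                  # item 2: per-container DNS records, no VIP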
Elasticsearch 8.5.0 answer.
For my needs, I didn't want to add a reverse-proxy/load balancer, but I do want to expose port 9200 on the swarm nodes where Elasticsearch replicas are running (using just swarm), so that external clients can access the Elasticsearch REST API. So I used endpoint mode dnsrr (ref) and exposed port 9200 on the hosts where the replicas run.
If you don't need to expose port 9200 (i.e., nothing will connect to the elasticsearch replicas outside of swarm), remove the ports: config from the elasticsearch service.
I also only want Elasticsearch replicas to run on a subset of my swarm nodes (3 of them). I created a docker node label, elasticsearch, on those three nodes. Then mode: global and the constraint node.labels.elasticsearch==True ensure one replica runs on each of those nodes.
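For reference, such a node label can be added from a manager node like this (swarm_node1 stands in for one of your actual node hostnames):
docker node update --label-add elasticsearch=True swarm_node1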
I run kibana on one of those 3 nodes too: swarm can pick which one, since port 5601 is exposed on swarm's ingress overlay network.
Lines you'll likely need to edit are marked with ######.
# docker network create -d overlay --attachable elastic-net
# cat elastic-stack-env
#!/bin/bash
export STACK_VERSION=8.5.0 # Elasticsearch and Kibana version
export ES_PORT=9200 # port to expose Elasticsearch HTTP API to the host
export KIBANA_PORT=5601 # port to expose Kibana to the host
read -p "Enter elastic user password: " ELASTIC_PASSWORD
read -p "Enter kibana_system user password: " KIBANA_PASSWORD
export KIBANA_URL=https://kibana.my-domain.com:$KIBANA_PORT #######
export SHARED_DIR=/some/nfs/or/shared/storage/elastic #######
export KIBANA_SSL_KEY_PATH=config/certs/kibana.key
export KIBANA_SSL_CERT_PATH=config/certs/kibana.crt
export ELASTIC_NODES=swarm_node1,swarm_node2,swarm_node3 #######
# ELASTIC_NODES must match what docker reports from {{.Node.Hostname}}
export KIBANA_SSL_CERT_AUTH_PATH=config/certs/My_Root_CA.crt #######
export CLUSTER_NAME=docker-cluster
export MEM_LIMIT=4294967296 # 4 GB; increase or decrease based on the available host memory (in bytes)
# cat elastic-stack.yml
version: "3.8"
services:
elasticsearch:
image: localhost:5000/elasticsearch:${STACK_VERSION:?} ####### I have a local registry
deploy:
endpoint_mode: dnsrr
mode: global # but note constraints below
placement:
constraints:
- node.labels.elasticsearch==True
resources:
limits:
memory:
${MEM_LIMIT}
dns: 127.0.0.11 # use docker DNS only (may not be required)
networks:
- elastic-net
volumes:
- ${SHARED_DIR:?}/certs:/usr/share/elasticsearch/config/certs
- /path/to/some/local/storage/elasticsearch:/usr/share/elasticsearch/data
ports: ##### remove if nothing outside of swarm needs to access port 9200
- target: 9200
published: ${ES_PORT} # we publish this port so that external clients can access the ES REST API
protocol: tcp
mode: host # required when using dnsrr
environment: # https://www.elastic.co/guide/en/elasticsearch/reference/master/settings.html
# https://www.elastic.co/guide/en/elasticsearch/reference/master/docker.html#docker-configuration-methods
- node.name={{.Node.Hostname}} # see Andrew Cachia's answer
- cluster.name=${CLUSTER_NAME}
- discovery.seed_hosts=elasticsearch # use service name here, since (docker's) DNS is used:
# https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html#unicast.hosts
- cluster.initial_master_nodes=${ELASTIC_NODES} # use node.names here
# https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html#initial_master_nodes
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/elasticsearch/elasticsearch.key
- xpack.security.http.ssl.certificate=certs/elasticsearch/elasticsearch.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.http.ssl.verification_mode=certificate
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/elasticsearch/elasticsearch.key
- xpack.security.transport.ssl.certificate=certs/elasticsearch/elasticsearch.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=basic
healthcheck:
test:
[ "CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
logging: # we use rsyslog
driver: syslog
options:
syslog-facility: "local2"
kibana:
# this service depends on the setup service (defined below), but docker stack has no
# way to specify dependencies, but more importantly, there's been a move away from this:
# https://stackoverflow.com/a/47714157/215945
image: localhost:5000/kibana:${STACK_VERSION:?} ######
hostname: kibana
deploy:
placement:
constraints:
- node.labels.elasticsearch==True # run KB on any one of the ES nodes
resources:
limits:
memory:
${MEM_LIMIT}
dns: 127.0.0.11 # use docker DNS only (may not be required)
networks:
- elastic-net
volumes:
- ${SHARED_DIR:?}/kibana:/usr/share/kibana/data
- ${SHARED_DIR:?}/certs:/usr/share/kibana/config/certs
ports:
- ${KIBANA_PORT}:5601
environment: # https://www.elastic.co/guide/en/kibana/master/settings.html
# https://www.elastic.co/guide/en/kibana/master/docker.html#environment-variable-config
# CAPS_WITH_UNDERSCORES must be used with Kibana
- SERVER_NAME=kibana
- ELASTICSEARCH_HOSTS=["https://elasticsearch:9200"]
- ELASTICSEARCH_USERNAME=kibana_system
- ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
- ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
- SERVER_PUBLICBASEURL=${KIBANA_URL}
# if you don't want to use https/TLS with Kibana, comment-out
# the next four lines
- SERVER_SSL_ENABLED=true
- SERVER_SSL_KEY=${KIBANA_SSL_KEY_PATH}
- SERVER_SSL_CERTIFICATE=${KIBANA_SSL_CERT_PATH}
- SERVER_SSL_CERTIFICATEAUTHORITIES=${KIBANA_SSL_CERT_AUTH_PATH}
- TELEMETRY_OPTIN=false
healthcheck:
test:
[
"CMD-SHELL",
"curl -sIk https://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
retries: 120
logging:
driver: syslog
options:
syslog-facility: "local2"
setup:
image: localhost:5000/elasticsearch:${STACK_VERSION:?} #######
deploy:
placement:
constraints:
- node.labels.elasticsearch==True
restart_policy: # https://docs.docker.com/compose/compose-file/compose-file-v3/#restart_policy
condition: none
volumes:
- ${SHARED_DIR:?}/certs:/usr/share/elasticsearch/config/certs
dns: 127.0.0.11 # use docker DNS only (may not be required)
networks:
- elastic-net
command: >
bash -c '
until curl -s --cacert config/certs/ca/ca.crt https://elasticsearch:9200 | grep -q "missing authentication credentials"
do
echo "waiting 30 secs for Elasticsearch availability..."
sleep 30
done
echo "setting kibana_system password"
until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://elasticsearch:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"
do
echo "waiting 10 secs before trying to set password again..."
sleep 10
done
echo "done"
'
logging:
driver: syslog
options:
syslog-facility: "local2"
networks:
elastic-net:
external: true
Deploy:
# . ./elastic-stack-env
# docker stack deploy -c elastic-stack.yml elastic
# # ... after Kibana comes up, you can remove the setup service if you want:
# docker service rm elastic_setup
Here's how I created the Elasticsearch CA and cert:
# cat elastic-certs.yml
version: "3.8"
services:
setup:
image: localhost:5000/elasticsearch:${STACK_VERSION:?} #######
volumes:
- ${SHARED_DIR:?}/certs:/usr/share/elasticsearch/config/certs
user: "0:0"
command: >
bash -c '
if [ ! -f config/certs/ca.zip ]; then
echo "Creating CA";
bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
unzip config/certs/ca.zip -d config/certs;
fi;
if [ ! -f config/certs/certs.zip ]; then
echo "Creating certs";
echo -ne \
"instances:\n"\
" - name: elasticsearch\n"\
" dns:\n"\
" - elasticsearch\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
> config/certs/instances.yml;
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
echo "Setting file permissions"
chown -R root:root config/certs;
find . -type d -exec chmod 750 \{\} \;;
find . -type f -exec chmod 640 \{\} \;;
fi;
sleep infinity
'
healthcheck:
test: ["CMD-SHELL", "[ -f config/certs/elasticsearch/elasticsearch.crt ]"]
interval: 1s
timeout: 5s
retries: 120
# . ./elastic-stack-env
# docker stack deploy -c elastic-certs.yml elastic-certs
# # ... ensure files are created under $SHARED_DIR/certs, then
# docker stack rm elastic-certs
How I created the Kibana cert is outside the scope of this question.
I run a Fluent Bit swarm service (mode: global, docker network elastic-net) to send logs to the elasticsearch service. Although outside the scope of this question, here's the salient config:
[OUTPUT]
name es
match <whatever is appropriate for you here>
host elasticsearch
port 9200
index my-index-default
http_user fluentbit
http_passwd ${FLUENTBIT_PASSWORD}
tls on
tls.ca_file /certs/ca/ca.crt
tls.crt_file /certs/elasticsearch/elasticsearch.crt
tls.key_file /certs/elasticsearch/elasticsearch.key
retry_limit false
suppress_type_name on
# trace_output on
Host elasticsearch will be resolved by docker's DNS server to the three IP addresses of the elasticsearch replicas, so there is no single point of failure.
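To confirm the dnsrr behaviour, one option (assuming the attachable elastic-net overlay created above) is to resolve the service name from a throwaway container on that network; you should get one A record per running replica:
# docker run --rm --network elastic-net alpine nslookup elasticsearch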
On a current project I use Docker. I must clarify that I am pretty inexperienced with it.
My project is a PHP/Symfony project. Until now, I used nginx:alpine and phpdocker/php-fpm to run my project in my dev environment. However, I found these unfit for my case, as my production actually uses Apache.
I found that another project I'm on uses the webdevops Docker images without trouble. I want to replace the two containers listed above with a single one, the webdevops/php-apache-dev:alpine Docker image.
Although the configuration of the two projects seems almost identical, my dev environment does not seem to work properly: I end up with this:
This site can’t be reached - 172.18.0.7 refused to connect.
(I also use Traefik, but the routed URI does not work any better. The error message is slightly different though: Bad Gateway).
I find myself unable to debug this. I don't even know where to look.
Below is the docker-compose.yml configuration I want to use:
version: '3.2'
services:
app:
image: webdevops/php-apache-dev:alpine
container_name: my-app
working_dir: /app
env_file: .env
environment:
WEB_DOCUMENT_ROOT: /public
WEB_DOCUMENT_INDEX: index.php
LOG_STDOUT: ./var/log/app.stdout.log
LOG_STDERR: ./var/log/app.stderr.log
# #todo list of unwanted PHP modules, cf. https://dockerfile.readthedocs.io/en/latest/content/DockerImages/dockerfiles/php-apache-dev.html#php-modules
# PHP_DISMOD:
php.error_reporting: E_ALL
PHP_DISPLAY_ERRORS: 1
PHP_POST_MAX_SIZE: 80M
PHP_UPLOAD_MAX_FILESIZE: 200M
PHP_MEMORY_LIMIT: 521M
PHP_MAX_EXECUTION_TIME: 300
PHP_DATE_TIMEZONE: Europe/Paris
volumes:
- .:/app
# - ./docker/apache2/conf.d:/opt/docker/etc/httpd/conf.d
- ~/.ssh:/home/application/.ssh:ro
- ~/.composer:/home/application/.composer
depends_on:
- elasticsearch
- database
The other containers work just as well as they did before. This one is the only one that fails.
When I call docker-compose up, no error is thrown. All the logs I could find within the container remain silent. As far as I can tell, Traefik does not seem to be the problem. Here is the result of docker ps:
[/var/www/html/citizen-game]$ docker ps *[master]
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6e9639e7a84d webdevops/php-apache-dev:alpine "/entrypoint supervi…" 4 hours ago Up 4 hours 80/tcp, 443/tcp, 9000/tcp my-app-app
be1b90fdf768 docker.elastic.co/elasticsearch/elasticsearch:6.2.4 "/usr/local/bin/dock…" 4 hours ago Up 4 hours (healthy) 9200/tcp, 9300/tcp my-app-elasticsearch
76fb8743a12f phpmyadmin/phpmyadmin "/run.sh supervisord…" 4 hours ago Up 4 hours 80/tcp, 9000/tcp my-app-phpmyadmin
dd41b4afe267 mysql:5.7 "docker-entrypoint.s…" 4 hours ago Up 4 hours (healthy) 3306/tcp, 33060/tcp my-app-database
91893783bcb1 rabbitmq:3.7-management "docker-entrypoint.s…" 4 hours ago Up 4 hours 4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp my-app-rabbitmq
63f551884bbf traefik:maroilles "/traefik --web --do…" 4 hours ago Up 4 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:8080->8080/tcp
My question is, I guess: how can I debug this? Am I missing something trivial?
Edit
Here is (part of) the content of the docker-compose.override.yml file:
version: '3.2'
services:
app:
volumes:
- ~/.ssh:/home/application/.ssh
- ~/.composer:/home/application/.composer
labels:
- "traefik.backend=my-app"
- "traefik.frontend.rule=Host:my-app.docker"
- "traefik.docker.network=proxy"
networks:
- internal
- proxy
environment:
PHP_DEBUGGER: xdebug
#XDEBUG_REMOTE_HOST: <your host IP address>
XDEBUG_REMOTE_PORT: 9000
XDEBUG_REMOTE_AUTOSTART: 1
XDEBUG_REMOTE_CONNECT_BACK: 1
XDEBUG_PROFILER_ENABLE: 1
XDEBUG_PROFILER_ENABLE_TRIGGER: 1000
traefik:
image: traefik
container_name: citizen-game-traefik
command: --web --docker --docker.domain=docker --logLevel=DEBUG
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
restart: always
networks:
- internal
- proxy
rabbitmq:
networks:
- internal
- proxy
networks:
proxy:
external:
name: traefik
internal:
EDIT 2:
@Mostafa
I ran the following:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-app-app
Result is:
172.18.0.7172.19.0.5
Trying these directly from the browser fails with "This site can't be reached". I suppose that was to be expected.
I ran the following from inside the container:
bash-4.4# supervisorctl status apache:apached
apache:apached RUNNING pid 13575, uptime 0:00:00
As suggested, I used ss -plant | grep 80. This does not work from within the container. Here is the result when called outside of it:
[/var/www/html/my-app]$ ss -plant | grep 80
LISTEN 0 80 127.0.0.1:3306 0.0.0.0:*
ESTAB 0 0 192.168.1.88:39360 198.252.206.25:443 users:(("chromium-browse",pid=4203,fd=80))
SYN-SENT 0 1 192.168.1.88:50680 192.241.181.178:443 users:(("chromium-browse",pid=4203,fd=41))
LISTEN 0 128 *:80 *:*
LISTEN 0 128 *:8080 *:*
I'm not sure it tells much, though. I tried to install ss inside the container with apk, but:
bash-4.4# apk add ss
ERROR: unsatisfiable constraints:
ss (missing):
required by: world[ss]
EDIT 3:
Here is the result of calling netstat:
bash-4.4# netstat -plant
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 229/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.11:32843 0.0.0.0:* LISTEN -
tcp 0 0 :::22 :::* LISTEN 229/sshd
tcp 0 0 :::9000 :::* LISTEN 225/php-fpm.conf)
bash-4.4# netstat -plant | grep httpd
(nothing)
I'm not sure how much this helps, though, since my other project, which works, yields the same result for netstat -plant | grep httpd. Without the grep, it includes many more lines, though.
The output you posted shows the exposed ports 80, 443 and 9000 for the container built from the webdevops/php-apache-dev:alpine image, so you should be able to access the container using its IP directly from the browser. First, you need to check the following:
Check whether 172.18.0.7 is the actual IP of the my-app-app container. Use the following command to check the IP of your running container:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-app-app
Or just run docker inspect my-app-app to get all the info about the container.
Check the logs for my-app-app. You may need to enter the container itself and check whether Apache is actually running, by executing the following supervisorctl command, which will tell you the status of the apache service:
$ supervisorctl status apache:apached
apache:apached RUNNING pid 72, uptime 0:07:43
If Apache is running correctly, then you should be able to browse the content using the container IP. In my case it gives me a default page, as I don't have an actual application.
Regarding your issue with Traefik, the Bad Gateway means that Traefik itself cannot reach your backend service, which is the my-app-app container in this case. You need to ensure that both traefik and my-app-app are on the same network, or at least that they can reach each other's IPs.
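One way to verify that is to list the containers attached to the shared network (proxy is the external network name from your override file) and check that both appear:
docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' proxy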
Update:
Instead of ss, it turns out the image contains the netstat command. To check which port Apache is using, run the following from inside the container:
# netstat -plant | grep httpd
tcp 0 0 :::80 :::* LISTEN 98/httpd
tcp 0 0 :::443 :::* LISTEN 98/httpd
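As a final check from inside the container (assuming curl is available in the image), you can confirm Apache answers locally; it should print an HTTP status line such as HTTP/1.1 200 OK:
bash-4.4# curl -sI http://localhost/ | head -n1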