I have a service running under a swarm stack:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
de74ba4d48c1 myregistry/myApi:1.0 "java -Dfile.encodin…" 3 minutes ago Up 3 minutes 8300/tcp myApiCtn
As you can see, my service is listening on port 8300.
The problem is that when I run curl, it doesn't reply:
[user@server home]$ curl http://localhost:8300/api/elk/batch
curl: (52) Empty reply from server
On the other hand, if I run my container manually with docker run (without a stack and without swarm services), curl works fine.
My docker-compose file is the following:
---
version: '3.4'
services:
api-batch:
image: myRegistry/myImageApi
networks:
- net_common
- default
stdin_open: true
volumes:
- /opt/application/current/logs:/opt/application/current/logs
- /var/opt/data/flat/flf/:/var/opt/data/flat/flf/
tty: true
ports:
- target: 8300
published: 8300
protocol: tcp
deploy:
mode: global
resources:
limits:
memory: 1024M
placement:
constraints:
- node.labels.type == test
healthcheck:
disable: true
networks:
net_common:
external: true
My network list is the following:
NETWORK ID NAME DRIVER SCOPE
17795bfee9ca bridge bridge local
0faecb070730 docker_gwbridge bridge local
51c34d251495 host host local
j2nnf26asn3k ingress overlay swarm
3all3tmn3qn9 net_common overlay swarm
b7alw2yi5fk9 srcd-current_default overlay swarm
Any suggestions to make it work as a swarm service?
I created 3 virtual machines using docker-machine; they are:
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
cluster - virtualbox Running tcp://192.168.99.101:2376 v18.09.5
cluster2 - virtualbox Running tcp://192.168.99.102:2376 v18.09.5
master - virtualbox Running tcp://192.168.99.100:2376 v18.09.5
and then I created a docker swarm on the master machine:
docker-machine ssh master "docker swarm init --advertise-addr 192.168.99.100"
and had cluster and cluster2 join the master:
docker-machine ssh cluster "docker swarm join --advertise-addr 192.168.99.101 --token xxxx 192.168.99.100:2377"
docker-machine ssh cluster2 "docker swarm join --advertise-addr 192.168.99.102 --token xxxx 192.168.99.100:2377"
The docker node ls output:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
r4a6y9wie4zp3pl4wi4e6wqp8 cluster Ready Active 18.09.5
sg9gq6s3k6vty7qap7co6eppn cluster2 Ready Active 18.09.5
xb6telu8cn3bfmume1kcektkt * master Ready Active Leader 18.09.5
Here is the deployment config, swarm.yml:
version: "3.3"
services:
elasticsearch:
image: elasticsearch:7.0.0
ports:
- "9200:9200"
- "9300:9300"
environment:
- cluster.name=elk
- network.host=_eth1:ipv4_
- network.bind_host=_eth1:ipv4_
- network.publish_host=_eth1:ipv4_
- discovery.seed_hosts=192.168.99.100,192.168.99.101
- cluster.initial_master_nodes=192.168.99.100,192.168.99.101
- bootstrap.memory_lock=false
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
networks:
- backend
deploy:
mode: replicated
replicas: 3
#endpoint_mode: dnsrr
restart_policy:
condition: none
resources:
limits:
cpus: "1.0"
memory: "1024M"
reservations:
memory: 20M
networks:
backend:
# driver: overlay
# attachable: true
I pulled the elasticsearch image onto each virtual machine:
docker-machine ssh master "docker image pull elasticsearch:7.0.0"
docker-machine ssh cluster "docker image pull elasticsearch:7.0.0"
docker-machine ssh cluster2 "docker image pull elasticsearch:7.0.0"
Before deploying, I ran this command to fix some elasticsearch bootstrap errors:
docker-machine ssh master "sudo sysctl -w vm.max_map_count=262144"
docker-machine ssh cluster "sudo sysctl -w vm.max_map_count=262144"
docker-machine ssh cluster2 "sudo sysctl -w vm.max_map_count=262144"
and then I ran `docker stack deploy -c swarm.yml es`, but the elasticsearch cluster does not work.
docker-machine ssh master
docker service logs es_elasticsearch -f
shows:
es_elasticsearch.1.uh1x0s9qr7mb#cluster | {"type": "server", "timestamp": "2019-04-25T16:28:47,143+0000", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "elk", "node.name": "e8dba5562417", "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [192.168.99.100, 192.168.99.101] to bootstrap a cluster: have discovered []; discovery will continue using [192.168.99.100:9300, 192.168.99.101:9300] from hosts providers and [{e8dba5562417}{Jy3t0AAkSW-jY-IygOCjOQ}{z7MYIf5wTfOhCX1r25wNPg}{10.255.0.46}{10.255.0.46:9300}{ml.machine_memory=1037410304, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }
es_elasticsearch.2.swswlwmle9e9#cluster2 | {"type": "server", "timestamp": "2019-04-25T16:28:47,389+0000", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "elk", "node.name": "af5d88a04b42", "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [192.168.99.100, 192.168.99.101] to bootstrap a cluster: have discovered []; discovery will continue using [192.168.99.100:9300, 192.168.99.101:9300] from hosts providers and [{af5d88a04b42}{zhxMeNMAQN2evKDlsA33qA}{fpYPTvJ6STmyqrgxlMkD_w}{10.255.0.47}{10.255.0.47:9300}{ml.machine_memory=1037410304, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }
es_elasticsearch.3.x8ouukovhh80#master | {"type": "server", "timestamp": "2019-04-25T16:28:48,818+0000", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "elk", "node.name": "0e7e4d96b31a", "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [192.168.99.100, 192.168.99.101] to bootstrap a cluster: have discovered []; discovery will continue using [192.168.99.100:9300, 192.168.99.101:9300] from hosts providers and [{0e7e4d96b31a}{Xs9966RjTEWvEbuj4-ySYA}{-eV4lvavSHq6JhoW0qWu6A}{10.255.0.48}{10.255.0.48:9300}{ml.machine_memory=1037410304, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }
I guess the cluster formation failure may be due to a network configuration error. I don't know how to fix it; I have tried modifying the config many times and it keeps failing.
Try this, it is working :) docker-compose.yml:
version: "3.7"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
hostname: "{{.Node.Hostname}}"
environment:
- node.name={{.Node.Hostname}}
- cluster.name=my-cluster
- "ES_JAVA_OPTS=-Xms2g -Xmx2g"
- discovery.seed_hosts=elasticsearch
- cluster.initial_master_nodes=node1,node2,node3
- node.ml=false
- xpack.ml.enabled=false
- xpack.monitoring.enabled=false
- xpack.security.enabled=false
- xpack.watcher.enabled=false
- bootstrap.memory_lock=false
volumes:
- elasticsearch-data:/usr/share/elasticsearch/data
deploy:
mode: global
endpoint_mode: dnsrr
resources:
limits:
memory: 4G
nginx:
image: nginx:1.17.1-alpine
ports:
- 9200:9200
deploy:
mode: global
command: |
/bin/sh -c "echo '
user nobody nogroup;
worker_processes auto;
events {
worker_connections 1024;
}
http {
client_max_body_size 4g;
resolver 127.0.0.11 ipv6=off;
server {
listen *:9200;
location / {
proxy_set_header Connection keep-alive;
set $$url http://elasticsearch:9200;
proxy_pass $$url;
proxy_set_header Host $$http_host;
proxy_set_header X-Real-IP $$remote_addr;
proxy_set_header X-Forwarded-For $$proxy_add_x_forwarded_for;
}
}
}' | tee /etc/nginx/nginx.conf && nginx -t && nginx -g 'daemon off;'"
volumes:
elasticsearch-data:
Trying to manually specify all the specific IPs and bindings is tricky because of the swarm overlay network.
Instead, simply make your ES nodes discoverable and let Swarm take care of the node discovery and communication. To make them discoverable, we can use a predictable name like the Swarm node hostname.
Try changing the environment settings in your swarm.yml file as follows:
environment:
- network.host=0.0.0.0
- discovery.seed_hosts=elasticsearch #Service name, to let Swarm handle discovery
- cluster.initial_master_nodes=master,cluster,cluster2 #Swarm nodes host names
- node.name={{.Node.Hostname}} #To create a predictable node name
This of course assumes that we already know the swarm hostnames, which you listed above. Without knowing these values, we would have no way of having a predictable set of node names to look for. In that case, you could create one ES node entry with a particular node name, and then another entry which references the first entry's node name as the cluster.initial_master_nodes.
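For illustration, a rough sketch of that fallback (the service names es-seed and es-data are made up here, and volumes, networks and memory settings are omitted): one non-replicated seed service with a fixed node name, and a second, global service whose nodes all point at the seed:
version: "3.7"
services:
  es-seed:
    image: elasticsearch:7.0.0
    environment:
      - cluster.name=elk
      - node.name=es-seed                       # fixed, predictable node name
      - cluster.initial_master_nodes=es-seed    # bootstrap the cluster from the seed only
      - discovery.seed_hosts=es-seed,es-data
    deploy:
      replicas: 1                               # do not replicate the seed
      endpoint_mode: dnsrr
  es-data:
    image: elasticsearch:7.0.0
    environment:
      - cluster.name=elk
      - node.name={{.Node.Hostname}}
      - cluster.initial_master_nodes=es-seed    # all other nodes refer to the seed
      - discovery.seed_hosts=es-seed
    deploy:
      mode: global
      endpoint_mode: dnsrr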
Use dnsrr mode without ports. Expose elasticsearch with nginx ;)
See my docker-compose.yml
In my experience https://github.com/shazChaudhry/docker-elastic works perfectly, and just one file from the entire repo is enough. I downloaded https://github.com/shazChaudhry/docker-elastic/blob/master/docker-compose.yml and removed the logstash bits, which I didn't need. Then I added the following to .bashrc:
export ELASTICSEARCH_HOST=$(hostname)
export ELASTICSEARCH_PASSWORD=foobar
export ELASTICSEARCH_USERNAME=elastic
export ELASTIC_VERSION=7.4.2
export INITIAL_MASTER_NODES=$ELASTICSEARCH_HOST
And docker stack deploy --compose-file docker-compose.yml elastic works.
Ideas I gleaned from Ahmet Vehbi Olgaç's docker-compose.yml, which worked for me (a combined sketch follows this list):
1. Use deploy / mode: global. This causes the swarm to deploy one replica to each swarm node that satisfies the placement configuration.
2. Use deploy / endpoint_mode: dnsrr. This lets all containers in the swarm reach the ES nodes by the service name, via DNS round-robin instead of a VIP.
3. Use hostname: {{.Node.Hostname}} or a similar template-based expression. This ensures a unique, predictable name for each deployed container.
4. Use environment / node.name={{.Node.Hostname}}. Again, you can vary the pattern; the point is that each ES node should get a unique name.
5. Use cluster.initial_master_nodes=*hostname1*,*hostname2*,..., assuming you know the hostnames of your docker worker machines. Use whatever pattern you used in #3, but substitute in the whole hostname, and include all the hostnames.
If you don't know your hostnames, you can do what Andrew Cachia's answer suggests: set up one container (do not replicate it) to act solely as the master seed and give it a predictable hostname, then have all other nodes refer to that node as the master seed. However, this introduces a single point of failure.
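Putting those ideas together, a minimal sketch of the relevant service definition (the hostnames master, cluster and cluster2 come from the question above; the rest is illustrative, not a tested file) would look roughly like this:
version: "3.7"
services:
  elasticsearch:
    image: elasticsearch:7.0.0
    hostname: "{{.Node.Hostname}}"              # idea 3: predictable per-node hostname
    environment:
      - node.name={{.Node.Hostname}}            # idea 4: unique node name per container
      - cluster.name=elk
      - discovery.seed_hosts=elasticsearch      # resolve the other tasks via docker DNS
      - cluster.initial_master_nodes=master,cluster,cluster2   # idea 5: the swarm hostnames
    deploy:
      mode: global                              # idea 1
      endpoint_mode: dnsrr                      # idea 2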
Elasticsearch 8.5.0 answer.
For my needs, I didn't want to add a reverse proxy/load balancer, but I did want to expose port 9200 on the swarm nodes where Elasticsearch replicas are running (using just swarm), so that external clients can access the Elasticsearch REST API. So I used endpoint mode dnsrr (ref) and published port 9200 on the hosts where the replicas run.
If you don't need to expose port 9200 (i.e., nothing will connect to the elasticsearch replicas outside of swarm), remove the ports: config from the elasticsearch service.
I also only want elasticsearch replicas to run on a subset of my swarm nodes (3 of them). I created docker node label elasticsearch on those three nodes. Then mode: global and constraint node.labels.elasticsearch==True will ensure 1 replica runs on each of those nodes.
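The label itself can be added from a manager node with docker node update; the node names below are the same placeholders used in elastic-stack-env further down:
# run on a manager node, once per Elasticsearch node
docker node update --label-add elasticsearch=True swarm_node1
docker node update --label-add elasticsearch=True swarm_node2
docker node update --label-add elasticsearch=True swarm_node3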
I run kibana on one of those 3 nodes too: swarm can pick which one, since port 5601 is exposed on swarm's ingress overlay network.
Lines you'll likely need to edit are marked with ######.
# docker network create -d overlay --attachable elastic-net
# cat elastic-stack-env
#!/bin/bash
export STACK_VERSION=8.5.0 # Elasticsearch and Kibana version
export ES_PORT=9200 # port to expose Elasticsearch HTTP API to the host
export KIBANA_PORT=5601 # port to expose Kibana to the host
read -p "Enter elastic user password: " ELASTIC_PASSWORD
read -p "Enter kibana_system user password: " KIBANA_PASSWORD
export KIBANA_URL=https://kibana.my-domain.com:$KIBANA_PORT #######
export SHARED_DIR=/some/nfs/or/shared/storage/elastic #######
export KIBANA_SSL_KEY_PATH=config/certs/kibana.key
export KIBANA_SSL_CERT_PATH=config/certs/kibana.crt
export ELASTIC_NODES=swarm_node1,swarm_node2,swarm_node3 #######
# ELASTIC_NODES must match what docker reports from {{.Node.Hostname}}
export KIBANA_SSL_CERT_AUTH_PATH=config/certs/My_Root_CA.crt #######
export CLUSTER_NAME=docker-cluster
export MEM_LIMIT=4294967296 # 4 GB; increase or decrease based on the available host memory (in bytes)
# cat elastic-stack.yml
version: "3.8"
services:
elasticsearch:
image: localhost:5000/elasticsearch:${STACK_VERSION:?} ####### I have a local registry
deploy:
endpoint_mode: dnsrr
mode: global # but note constraints below
placement:
constraints:
- node.labels.elasticsearch==True
resources:
limits:
memory:
${MEM_LIMIT}
dns: 127.0.0.11 # use docker DNS only (may not be required)
networks:
- elastic-net
volumes:
- ${SHARED_DIR:?}/certs:/usr/share/elasticsearch/config/certs
- /path/to/some/local/storage/elasticsearch:/usr/share/elasticsearch/data
ports: ##### remove if nothing outside of swarm needs to access port 9200
- target: 9200
published: ${ES_PORT} # we publish this port so that external clients can access the ES REST API
protocol: tcp
mode: host # required when using dnsrr
environment: # https://www.elastic.co/guide/en/elasticsearch/reference/master/settings.html
# https://www.elastic.co/guide/en/elasticsearch/reference/master/docker.html#docker-configuration-methods
- node.name={{.Node.Hostname}} # see Andrew Cachia's answer
- cluster.name=${CLUSTER_NAME}
- discovery.seed_hosts=elasticsearch # use service name here, since (docker's) DNS is used:
# https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html#unicast.hosts
- cluster.initial_master_nodes=${ELASTIC_NODES} # use node.names here
# https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html#initial_master_nodes
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/elasticsearch/elasticsearch.key
- xpack.security.http.ssl.certificate=certs/elasticsearch/elasticsearch.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.http.ssl.verification_mode=certificate
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/elasticsearch/elasticsearch.key
- xpack.security.transport.ssl.certificate=certs/elasticsearch/elasticsearch.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=basic
healthcheck:
test:
[ "CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
logging: # we use rsyslog
driver: syslog
options:
syslog-facility: "local2"
kibana:
# this service depends on the setup service (defined below), but docker stack has no
# way to specify dependencies, but more importantly, there's been a move away from this:
# https://stackoverflow.com/a/47714157/215945
image: localhost:5000/kibana:${STACK_VERSION:?} ######
hostname: kibana
deploy:
placement:
constraints:
- node.labels.elasticsearch==True # run KB on any one of the ES nodes
resources:
limits:
memory:
${MEM_LIMIT}
dns: 127.0.0.11 # use docker DNS only (may not be required)
networks:
- elastic-net
volumes:
- ${SHARED_DIR:?}/kibana:/usr/share/kibana/data
- ${SHARED_DIR:?}/certs:/usr/share/kibana/config/certs
ports:
- ${KIBANA_PORT}:5601
environment: # https://www.elastic.co/guide/en/kibana/master/settings.html
# https://www.elastic.co/guide/en/kibana/master/docker.html#environment-variable-config
# CAPS_WITH_UNDERSCORES must be used with Kibana
- SERVER_NAME=kibana
- ELASTICSEARCH_HOSTS=["https://elasticsearch:9200"]
- ELASTICSEARCH_USERNAME=kibana_system
- ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
- ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
- SERVER_PUBLICBASEURL=${KIBANA_URL}
# if you don't want to use https/TLS with Kibana, comment-out
# the next four lines
- SERVER_SSL_ENABLED=true
- SERVER_SSL_KEY=${KIBANA_SSL_KEY_PATH}
- SERVER_SSL_CERTIFICATE=${KIBANA_SSL_CERT_PATH}
- SERVER_SSL_CERTIFICATEAUTHORITIES=${KIBANA_SSL_CERT_AUTH_PATH}
- TELEMETRY_OPTIN=false
healthcheck:
test:
[
"CMD-SHELL",
"curl -sIk https://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
retries: 120
logging:
driver: syslog
options:
syslog-facility: "local2"
setup:
image: localhost:5000/elasticsearch:${STACK_VERSION:?} #######
deploy:
placement:
constraints:
- node.labels.elasticsearch==True
restart_policy: # https://docs.docker.com/compose/compose-file/compose-file-v3/#restart_policy
condition: none
volumes:
- ${SHARED_DIR:?}/certs:/usr/share/elasticsearch/config/certs
dns: 127.0.0.11 # use docker DNS only (may not be required)
networks:
- elastic-net
command: >
bash -c '
until curl -s --cacert config/certs/ca/ca.crt https://elasticsearch:9200 | grep -q "missing authentication credentials"
do
echo "waiting 30 secs for Elasticsearch availability..."
sleep 30
done
echo "setting kibana_system password"
until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://elasticsearch:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"
do
echo "waiting 10 secs before trying to set password again..."
sleep 10
done
echo "done"
'
logging:
driver: syslog
options:
syslog-facility: "local2"
networks:
elastic-net:
external: true
Deploy:
# . ./elastic-stack-env
# docker stack deploy -c elastic-stack.yml elastic
# # ... after Kibana comes up, you can remove the setup service if you want:
# docker service rm elastic_setup
Here's how I created the Elasticsearch CA and cert:
# cat elastic-certs.yml
version: "3.8"
services:
setup:
image: localhost:5000/elasticsearch:${STACK_VERSION:?} #######
volumes:
- ${SHARED_DIR:?}/certs:/usr/share/elasticsearch/config/certs
user: "0:0"
command: >
bash -c '
if [ ! -f certs/ca.zip ]; then
echo "Creating CA";
bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
unzip config/certs/ca.zip -d config/certs;
fi;
if [ ! -f certs/certs.zip ]; then
echo "Creating certs";
echo -ne \
"instances:\n"\
" - name: elasticsearch\n"\
" dns:\n"\
" - elasticsearch\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
> config/certs/instances.yml;
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
echo "Setting file permissions"
chown -R root:root config/certs;
find . -type d -exec chmod 750 \{\} \;;
find . -type f -exec chmod 640 \{\} \;;
fi;
sleep infinity
'
healthcheck:
test: ["CMD-SHELL", "[ -f config/certs/elasticsearch/elasticsearch.crt ]"]
interval: 1s
timeout: 5s
retries: 120
# . ./elastic-stack-env
# docker stack deploy -c elastic-certs.yml elastic-certs
# # ... ensure files are created under $SHARED_DIR/certs, then
# docker stack rm elastic-certs
How I created the Kibana cert is outside the scope of this question.
I run a Fluent Bit swarm service (mode: global, docker network elastic-net) to send logs to the elasticsearch service. Although outside the scope of this question, here's the salient config:
[OUTPUT]
name es
match <whatever is appropriate for you here>
host elasticsearch
port 9200
index my-index-default
http_user fluentbit
http_passwd ${FLUENTBIT_PASSWORD}
tls on
tls.ca_file /certs/ca/ca.crt
tls.crt_file /certs/elasticsearch/elasticsearch.crt
tls.key_file /certs/elasticsearch/elasticsearch.key
retry_limit false
suppress_type_name on
# trace_output on
Host elasticsearch will be resolved by docker's DNS server to the three IP addresses of the elasticsearch replicas, so there is no single point of failure.
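A quick way to check that resolution (this check is my addition, not part of the original config): attach a throwaway container to the same attachable overlay network and look up the service name; with endpoint_mode: dnsrr you should get one A record per running replica rather than a single virtual IP.
# run from any node that can attach to the elastic-net overlay network
docker run --rm --network elastic-net alpine nslookup elasticsearch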
I followed the Docker tutorial to set up a swarm.
I used Docker Toolbox, because I'm on Windows 10 Family.
I went through all the steps, but at the end the command "curl ip_address" doesn't work; accessing the URL in a browser fails as well.
$ docker --version
Docker version 18.03.0-ce, build 0520e24302
docker-compose.yml, located in /home/docker of the virtual machine called "myvm1":
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: 12081981/friendlyhello:part1
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "80:80"
networks:
- webnet
networks:
webnet:
The swarm:
$ docker-machine ssh myvm1 "docker stack ps getstartedlab"
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
blmx8mldam52 getstartedlab_web.1 12081981/friendlyhello:part1 myvm1 Running Running 9 seconds ago
04ctl86chp6o getstartedlab_web.2 12081981/friendlyhello:part1 myvm3 Running Running 6 seconds ago
r3qyznllno9j getstartedlab_web.3 12081981/friendlyhello:part1 myvm3 Running Running 6 seconds ago
2twwicjssie9 getstartedlab_web.4 12081981/friendlyhello:part1 myvm1 Running Running 9 seconds ago
o4rk4x7bb3vm getstartedlab_web.5 12081981/friendlyhello:part1 myvm3 Running Running 6 seconds ago
Result of "docker-machine ls":
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Running tcp://192.168.99.100:2376 v18.09.0
myvm1 * virtualbox Running tcp://192.168.99.102:2376 v18.09.0
myvm3 - virtualbox Running tcp://192.168.99.103:2376 v18.09.0
Test with curl:
$ curl 192.168.99.102
curl: (7) Failed to connect to 192.168.99.102 port 80: Connection refused
How do I debug this?
I can give more information, if you want.
Thanks in advance.
Use of the routing mesh on Windows appears to be an EE-only feature right now. You can monitor this Docker for Windows issue for more details. The current workaround is to use DNSRR internally and publish ports to the host directly instead of with the routing mesh. If you want your application to be reachable from any node in the cluster, this means you'd need a service on every host in the cluster, scheduled globally, listening on the requested port. E.g.:
version: "3.2"
services:
web:
# replace username/repo:tag with your name and image details
image: 12081981/friendlyhello:part1
deploy:
# global runs 1 on every node, instead of the replicated variant
mode: global
# DNSRR skips the VIP normally assigned to services
endpoint_mode: dnsrr
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- target: 80
published: 80
protocol: tcp
# host publishes the port directly from the container without the routing mesh
mode: host
networks:
- webnet
networks:
webnet:
Problem
I'm trying to set up JetBrains Hub, YouTrack, Upsource and TeamCity in Docker containers and configure each to be available on its own IP (macvlan) on the default ports: 80 redirecting to 443, and 443 for HTTPS (so the port numbers do not show up in the browser).
However if I do that I get:
Could not listen on address 0.0.0.0 and port 443
Leaving the team tools on their default ports 8080 and 8443 works, and giving them ports above 2000 seems to work as well.
I checked with fuser 443/tcp and netstat -tulpn, but there is nothing running on port 80 or 443. (I had to install the packages for those tools in the container.)
I tried setting the listening address to the NIC's IP or 172.0.0.1, but this is refused as well:
root@teamtools [ /opt/teamtools ]# docker run --rm -it \
-v /opt/hub/data:/opt/hub/data \
-v /opt/hub/conf:/opt/hub/conf \
-v /opt/hub/logs:/opt/hub/logs \
-v /opt/hub/backups:/opt/hub/backups \
jetbrains/hub:2018.2.9840 \
configure --listen-address=192.168.1.211
* Configuring JetBrains Hub 2018.2
* Setting property 'listen-address' to '192.168.1.211' from arguments
[APP-WRAPPER] Failed to configure Hub: java.util.concurrent.ExecutionException: com.jetbrains.bundle.exceptions.BadConfigurationException: Could not listen on address {192.168.1.211} . Please specify another listen address in property listen-address
Questions:
Why can I not use ports 80 and 443?
Why does it work for ports above 2000?
How can I make this work without a reverse proxy? (A reverse proxy comes with a whole bunch of other issues that I'm trying to avoid with this setup.)
Setup
ESXi 6.7 Host
- vSwitch0 (Allow promiscuous mode: Yes)
- port group: VM Netork (Allow promiscuous mode: No)
- other VMs
- port group: Promiscuous Ports (Allow promiscuous mode: Yes)
- Teamtools VM (Photon OS 2.0, IP: 192.168.1.210)
- firewall based on: https://unrouted.io/2017/08/15/docker-firewall/
- docker/docker-compose
- hub (IP: 192.168.1.211:80/443)
- youtrack (IP: 192.168.1.212:80/443)
- upsource (IP: 192.168.1.213:80/443)
- teamcity-server (IP: 192.168.1.214:80/443)
- teamcity_db (MariaDB 10.3) (IP: 192.168.1.215:3306)
docker-compose.yml
version: '2'
networks:
macnet:
driver: macvlan
driver_opts:
parent: eth0
ipam:
config:
- subnet: 192.168.1.0/24
gateway: 192.168.1.1
services:
hub:
# set a custom container name so no more than one container can be created from this config
container_name: hub
image: "jetbrains/hub:2018.2.9840"
restart: unless-stopped
volumes:
- /opt/hub/data:/opt/hub/data
- /opt/hub/conf:/opt/hub/conf
- /opt/hub/logs:/opt/hub/logs
- /opt/hub/backups:/opt/hub/backups
- /opt/teamtools:/opt/teamtools
expose:
- "80"
- "443"
- "8080"
- "8443"
networks:
macnet:
ipv4_address: 192.168.1.211
domainname: office.mydomain.com
hostname: hub
environment:
- "JAVA_OPTS=-J-Djavax.net.ssl.trustStore=/opt/teamtools/certs/keyStore.p12 -J-Djavax.net.ssl.trustStorePassword=xxxxxxxxxxxxxx"
...
Upsource is run by the user jetbrains, which is non-root, and non-root processes cannot bind to privileged ports (below 1024) by default:
https://www.w3.org/Daemon/User/Installation/PrivilegedPorts.html
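One possible workaround, sketched here under the assumption that the images cannot simply be switched to run as root: lower the first unprivileged port inside the container with a namespaced sysctl, so the non-root user can bind 80 and 443. The sysctls key needs compose file format 2.1 or later, and net.ipv4.ip_unprivileged_port_start requires kernel 4.11 or newer; if that is not available, running setcap cap_net_bind_service=+ep on the Java binary in a derived image is another option.
version: '2.1'
services:
  hub:
    image: "jetbrains/hub:2018.2.9840"
    # allow non-root processes in this container to bind ports >= 80
    sysctls:
      - net.ipv4.ip_unprivileged_port_start=80
    # ... the rest of the hub service from the question stays the same ...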
I have been playing around with docker-in-docker (dind) setups and am running into a weird problem.
If I run a docker container separately inside dind and expose a port, I can connect to that port without any problems. For example, using the docker swarm visualizer inside dind:
/home/dockremap # docker run -d -p 8080:8080 dockersamples/visualizer:stable
/home/dockremap # wget localhost:8080
Connecting to localhost:8080 (127.0.0.1:8080)
index.html 100% |*********************** ....
However, if I run the same image as a swarm service, deployed from a compose file, it doesn't work.
Here is what my compose file looks like:
version: "3"
services:
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
networks:
webnet:
and the commands I run:
/home/dockremap # docker swarm init
/home/dockremap # docker stack deploy -c compose.yaml test
Now when I run wget, I get a connection refused error:
/home/dockremap # wget localhost:8080
Connecting to localhost:8080 (127.0.0.1:8080)
wget: can't connect to remote host (127.0.0.1): Connection refused
Should doing this sort of thing in dind be able to work by default, or is there something I need to configure? I am using docker 17.03.1-ce on Windows and here is what I get when I run docker info in dind:
Containers: 2
Running: 1
Paused: 0
Stopped: 1
Images: 1
Server Version: 17.05.0-ce
Storage Driver: vfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: active
NodeID: wz2r6iuyqztg3ivyk9fwsn976
Is Manager: true
ClusterID: mshadtrs0b1oayva2vrquf67d
Managers: 1
Nodes: 1
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: 172.17.0.2
Manager Addresses:
172.17.0.2:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.4.59-boot2docker
Operating System: Alpine Linux v3.5 (containerized)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 987.1MiB
Name: 7e480e7313ae
ID: EL7P:NI2I:TOR4:I7IW:DPAB:WKYU:6A6J:NCC7:3K3E:6YVH:PYVB:2L2W
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled