How can I connect from `project` to `mysql` container in docker swarm?

I am trying to deploy a stack to Docker Swarm with the docker-compose.yaml file below, using the command:
docker stack deploy --with-registry-auth -c docker-compose.yaml project
version: "3.9"
services:
mysql:
image: mysql:8.0
deploy:
replicas: 1
volumes:
- mysql_data:/var/lib/mysql
networks:
- internal
ports:
- 3306:3306
environment:
MYSQL_ROOT_HOST: '%'
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: project_production
MYSQL_USER: username
MYSQL_PASSWORD: password
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
environment:
- node.name=es01
- cluster.name=es-docker-cluster
- discovery.seed_hosts=es02,es03
- cluster.initial_master_nodes=es01,es02,es03
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- internal
website:
image: registry.gitlab.com/project/project-website:latest
networks:
- internal
deploy:
replicas: 1
ports:
- 3000:3000
environment:
- RAILS_ENV=production
- MYSQL_HOST=mysql
- ES_HOST=http://es01
- project_DATABASE_USERNAME=root
- project_DATABASE_PASSWORD=root
depends_on:
- es01
- mysql
volumes:
data01:
driver: local
data02:
driver: local
data03:
driver: local
mysql_data:
networks:
internal:
external: true
name: project
Before I deploy the stack I also have created the network for the project via the following command:
docker network create -d overlay project
But when I check the logs with the docker logs command, I see the following error stopping my project from starting:
Mysql2::Error: Host '10.0.2.202' is not allowed to connect to this MySQL server
I followed the documentation exactly, so I am not sure what is wrong with the settings I have come up with.
Question:
How can I connect from project to mysql container in docker swarm?

Based on the documentation, Docker Swarm automatically creates the overlay network for you. So I think you don't need to create an external network by default, unless you have specific needs:
When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:
an overlay network called ingress, which handles the control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default.
a bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
As Chris mentioned in the comments, the DB credentials also don't match: the website service connects as root/root instead of the username/password pair defined on the mysql service.
OPTIONAL: MYSQL_ROOT_HOST is only necessary if you want to connect as the root user, which is not recommended in production environments. There's also no need to expose the port to the host machine, since the database service will only be used from inside the cluster. So if you still want to use the root user, you can set the variable to allow connections only from inside the cluster, e.g. MYSQL_ROOT_HOST=10.*.*.*.
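As a concrete sketch (assuming the Rails app is meant to connect as the dedicated MySQL user rather than root), the website environment could be aligned with the credentials the mysql service actually creates:

website:
  environment:
    - RAILS_ENV=production
    - MYSQL_HOST=mysql
    - ES_HOST=http://es01
    - project_DATABASE_USERNAME=username   # matches MYSQL_USER above
    - project_DATABASE_PASSWORD=password   # matches MYSQL_PASSWORD above

With matching credentials, the Mysql2 client should be able to authenticate over the overlay network without exposing port 3306 on the host at all.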

Related

Docker-compose network : how to connect to other container?

I'm running Zabbix with docker-compose, with multiple containers.
I have an issue with connecting two containers to each other (see: docker containerized zabbix server monitoring same host running the zabbix server : connection refused).
So I'm wondering how connections between containers work in docker-compose: do I need to use links in the docker-compose.yml? Do I need to specify an IP address in networks in docker-compose.yml and then use this IP address in my apps?
In particular, if I want to connect from container A, named containerA in docker-compose.yml, to container B, named containerB in docker-compose.yml, can I use the container name as it appears in docker ps -a? (That name is often not the same as the service name in docker-compose.yml.) Or should I use the service name as it appears in docker-compose.yml? Or should I use links service:alias so I can use the alias in my app?
I tried to use links, but I had a circular-link problem since I was linking the two containers to each other.
This is the yml file (notice the network alias is the same as the first service name...):
version: '3.5'
services:
  zabbix-server:
    container_name: zabbixserver
    image: zabbix/zabbix-server-pgsql:centos-6.0-latest
    ports:
      - "10051:10051"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./zbx_env/usr/lib/zabbix/alertscripts:/usr/lib/zabbix/alertscripts:ro
      - ./zbx_env/usr/lib/zabbix/externalscripts:/usr/lib/zabbix/externalscripts:ro
      - ./zbx_env/var/lib/zabbix/export:/var/lib/zabbix/export:rw
      - ./zbx_env/var/lib/zabbix/modules:/var/lib/zabbix/modules:rw
      - ./zbx_env/var/lib/zabbix/enc:/var/lib/zabbix/enc:ro
      - ./zbx_env/var/lib/zabbix/ssh_keys:/var/lib/zabbix/ssh_keys:ro
      - ./zbx_env/var/lib/zabbix/mibs:/var/lib/zabbix/mibs:ro
      - snmptraps:/var/lib/zabbix/snmptraps:rw
      # - ./env_vars/.ZBX_DB_CA_FILE:/run/secrets/root-ca.pem:ro
      # - ./env_vars/.ZBX_DB_CERT_FILE:/run/secrets/client-cert.pem:ro
      # - ./env_vars/.ZBX_DB_KEY_FILE:/run/secrets/client-key.pem:ro
    ulimits:
      nproc: 65535
      nofile:
        soft: 20000
        hard: 40000
    deploy:
      resources:
        limits:
          cpus: '0.70'
          memory: 1G
        reservations:
          cpus: '0.5'
          memory: 512M
    env_file:
      - ./env_vars/.env_db_pgsql
      - ./env_vars/.env_srv
    secrets:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
    depends_on:
      - postgres-server
    networks:
      zbx_net_backend:
        aliases:
          - zabbix-server
          - zabbix-server-pgsql
          - zabbix-server-centos-pgsql
          - zabbix-server-pgsql-centos
      zbx_net_frontend:
    # devices:
    #   - "/dev/ttyUSB0:/dev/ttyUSB0"
    stop_grace_period: 30s
    sysctls:
      - net.ipv4.ip_local_port_range=1024 65000
      - net.ipv4.conf.all.accept_redirects=0
      - net.ipv4.conf.all.secure_redirects=0
      - net.ipv4.conf.all.send_redirects=0
    labels:
      com.zabbix.description: "Zabbix server with PostgreSQL database support"
      com.zabbix.company: "Zabbix LLC"
      com.zabbix.component: "zabbix-server"
      com.zabbix.dbtype: "pgsql"
      com.zabbix.os: "centos"
  zabbix-agent:
    image: zabbix/zabbix-agent:centos-6.0-latest
    ports:
      - "10050:10050"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./zbx_env/etc/zabbix/zabbix_agentd.d:/etc/zabbix/zabbix_agentd.d:ro
      - ./zbx_env/var/lib/zabbix/modules:/var/lib/zabbix/modules:ro
      - ./zbx_env/var/lib/zabbix/enc:/var/lib/zabbix/enc:ro
      - ./zbx_env/var/lib/zabbix/ssh_keys:/var/lib/zabbix/ssh_keys:ro
    deploy:
      resources:
        limits:
          cpus: '0.2'
          memory: 128M
        reservations:
          cpus: '0.1'
          memory: 64M
      mode: global
    links:
      - zabbix-server:zabbix-server
    env_file:
      - ./env_vars/.env_agent
    privileged: true
    pid: "host"
    networks:
      zbx_net_backend:
        aliases:
          - zabbix-agent
          - zabbix-agent-passive
          - zabbix-agent-centos
    stop_grace_period: 5s
    labels:
      com.zabbix.description: "Zabbix agent"
      com.zabbix.company: "Zabbix LLC"
      com.zabbix.component: "zabbix-agentd"
      com.zabbix.os: "centos"
Use the other container's Compose service name and the port the process inside that container is listening on. In your example, assuming the second ports: numbers are both correct, both containers should be able to access zabbix-server:10051 and zabbix-agent:10050. Also see Networking in Compose in the Docker documentation.
Do I need to use links in docker-compose.yml?
The links: option is obsolete and you should delete it if present. expose: is similarly only used by the obsolete first-generation Docker networking, and there are no consequences to deleting it from your Compose file.
Do I need to specify an IP address in networks in docker-compose.yml?
No, Docker can assign container-private IP addresses on its own. These are an internal implementation detail of Docker. It's useful to know they exist (in particular, since each container has a private IP address, multiple containers can each use the same port internally) but you never need to directly specify them or look them up.
You rarely if ever need to specify networks: { aliases: } or to override a container's generated container_name:. The docker ps names won't match what's in the Compose file but that's not a practical problem. If you need to directly manage an individual container you can e.g. docker-compose stop zabbix-server, and as previously described you can use the Compose service names for container-to-container communication.
In fact, for most practical cases, you can delete all of the networks: blocks entirely. Compose provides a network named default for you, and you don't usually need to configure anything.
So, in the file you originally show, I'd suggest deleting all of the networks:, links:, and container_name: options. The ports: are required only if you want to call into these containers from outside of Docker. Even after deleting all of these, the containers can still reach each other using the host names and ports described above.
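As a sketch of that simplification (trimmed to the essentials; ZBX_SERVER_HOST is the standard zabbix-agent image variable, but verify it against your env_file contents), the same two services can be declared with no networking options at all and still reach each other on the Compose-provided default network:

version: '3.5'
services:
  zabbix-server:
    image: zabbix/zabbix-server-pgsql:centos-6.0-latest
    ports:
      - "10051:10051"
  zabbix-agent:
    image: zabbix/zabbix-agent:centos-6.0-latest
    environment:
      - ZBX_SERVER_HOST=zabbix-server   # reach the server by its Compose service name

Here zabbix-agent resolves zabbix-server through Compose's built-in DNS; no links:, aliases:, or custom networks: blocks are required.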

ERROR: Pool overlaps with other one on this address space

I'm trying to implement this tutorial. The docker-compose content is this:
# WARNING: Do not deploy this tutorial configuration directly to a production environment
#
# The tutorial docker-compose files have not been written for production deployment and will not
# scale. A proper architecture has been sacrificed to keep the narrative focused on the learning
# goals, they are just used to deploy everything onto a single Docker machine. All FIWARE components
# are running at full debug and extra ports have been exposed to allow for direct calls to services.
# They also contain various obvious security flaws - passwords in plain text, no load balancing,
# no use of HTTPS and so on.
#
# This is all to avoid the need of multiple machines, generating certificates, encrypting secrets
# and so on, purely so that a single docker-compose file can be read as an example to build on,
# not use directly.
#
# When deploying to a production environment, please refer to the Helm Repository
# for FIWARE Components in order to scale up to a proper architecture:
#
# see: https://github.com/FIWARE/helm-charts/
#
version: "3.5"
services:
# Orion is the context broker
orion:
image: fiware/orion:latest
hostname: orion
container_name: fiware-orion
depends_on:
- mongo-db
networks:
- default
expose:
- "1026"
ports:
- "1026:1026"
command: -dbhost mongo-db -logLevel DEBUG
healthcheck:
test: curl --fail -s http://orion:1026/version || exit 1
interval: 5s
# Tutorial displays a web app to manipulate the context directly
tutorial:
image: fiware/tutorials.context-provider
hostname: iot-sensors
container_name: fiware-tutorial
networks:
- default
expose:
- "3000"
- "3001"
ports:
- "3000:3000"
- "3001:3001"
environment:
- "DEBUG=tutorial:*"
- "PORT=3000"
- "IOTA_HTTP_HOST=iot-agent"
- "IOTA_HTTP_PORT=7896"
- "DUMMY_DEVICES_PORT=3001"
- "DUMMY_DEVICES_API_KEY=4jggokgpepnvsb2uv4s40d59ov"
- "DUMMY_DEVICES_TRANSPORT=HTTP"
iot-agent:
image: fiware/iotagent-ul:latest
hostname: iot-agent
container_name: fiware-iot-agent
depends_on:
- mongo-db
networks:
- default
expose:
- "4041"
- "7896"
ports:
- "4041:4041"
- "7896:7896"
environment:
- "IOTA_CB_HOST=orion"
- "IOTA_CB_PORT=1026"
- "IOTA_NORTH_PORT=4041"
- "IOTA_REGISTRY_TYPE=mongodb"
- "IOTA_LOG_LEVEL=DEBUG"
- "IOTA_TIMESTAMP=true"
- "IOTA_MONGO_HOST=mongo-db"
- "IOTA_MONGO_PORT=27017"
- "IOTA_MONGO_DB=iotagentul"
- "IOTA_HTTP_PORT=7896"
- "IOTA_PROVIDER_URL=http://iot-agent:4041"
# Database
mongo-db:
image: mongo:3.6
hostname: mongo-db
container_name: db-mongo
expose:
- "27017"
ports:
- "27017:27017"
networks:
- default
command: --bind_ip_all --smallfiles
volumes:
- mongo-db:/data
healthcheck:
test: |
host=`hostname --ip-address || echo '127.0.0.1'`;
mongo --quiet $host/test --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)' && echo 0 || echo 1
interval: 5s
networks:
default:
ipam:
config:
- subnet: 172.18.1.0/24
volumes:
mongo-db: ~
But when I run it with the command docker-compose up -d, I get this error:
WARNING: The host variable is not set. Defaulting to a blank string.
Creating network "fiware_default" with the default driver
ERROR: Pool overlaps with other one on this address space
I also get these networks by running the command docker network ls:
NETWORK ID     NAME            DRIVER    SCOPE
78403834b9bd   bridge          bridge    local
1dc5b7d0534b   hadig_default   bridge    local
4162244c37b0   host            host      local
ac5a94a89bde   none            null      local
I see no conflict with the name "fiware_default". Where is the problem?
The "pool" the error message refers to is the 172.18.1.0/24 CIDR block that file manually specifies. If something else on your system is using that network space, it won't start up. (Docker might have assigned another Compose file's network to 172.18.0.0/16, for example.)
You don't usually need to manually specify IP addresses in Docker at all, and so you should remove that ipam: block. Having done that, you're telling Compose to configure the default network with default settings, and you can actually remove the entire networks: block at the end of the file.
The exception to this is if your host network environment is using some of the same IP address blocks, and then you do potentially need an override like this. If you run ifconfig or a similar command from the host (or look at your host's network settings from a desktop application) and your host or a VPN is using a 172.18.1.* address, you'll also get this message. In that case, change the network to something else; if you only need a /24 (254 addresses) then setting subnet: 192.168.123.0/24 (where "123" can be any number between 1 and 254) should get you past this.
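To see which address pools are already taken (a quick diagnostic; both flags and the Go template fields are standard docker CLI), you can print every network's subnet and pick one that is free:

docker network ls -q | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'

Any subnet printed there (or used by a host interface or VPN) is one your ipam: override must avoid.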

Access docker ports from a container inside another container at localhost

I have a setup where I build 2 containers with docker-compose.
One container is a web application; I can access it on port 8080. The other container is ElasticSearch, accessible on port 9200.
This is the content of my docker-compose.yml file:
version: '3'
services:
  serverapplication:
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
  elasticsearch:
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
When I browse to http://localhost:8080/serverapplication I can see my server application.
When I browse to http://localhost:9200/ I can see the default page of ElasticSearch.
But when I try to access ElasticSearch from inside the serverapplication, I get a "connection refused". It seems that the 9200 port is unreachable at localhost for the server application.
How can I fix this?
It's never safe to use localhost here, since localhost means something different to your host system, to elasticsearch, and to your server application. You're only able to access the containers via localhost on your host because you're mapping container ports onto your host's ports. Instead:
- put them in the same network
- give the containers a name
- access elasticsearch through its container name, which Docker automatically resolves to the current IP of your elasticsearch container
Code:
version: '3'
services:
  serverapplication:
    container_name: serverapplication
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
    networks:
      - my-network
  elasticsearch:
    container_name: elasticsearch
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Your server application must use the host name elasticsearch to access the elasticsearch service, i.e. http://elasticsearch:9200
Your serverapplication and elasticsearch are running in different containers. The localhost of serverapplication is different from localhost of elasticsearch.
docker-compose sets up a network between the containers such that they can be accessed with their service names. So from your serverapplication, you must use the name 'elasticsearch' to connect to it.
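As a quick sanity check (a hypothetical session, assuming curl is installed in the serverapplication image), you can verify name resolution from inside the application container:

docker-compose exec serverapplication curl http://elasticsearch:9200

If that returns the Elasticsearch banner JSON, the application only needs its configuration changed from localhost:9200 to elasticsearch:9200.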

Creating app and search containers to run in AWS ECS/Fargate

In our situation we want to run an app container and a search container as separate services on our ECS cluster.
I need to create a search container to run under ECS/Fargate and linked to a load balancer.
I need to create an app container which is able to talk to our PostgreSQL RDS instance, which already has the fusionauth tables setup, and talk to the search container through the load balancer.
I started with the docker-compose.yaml and deleted the db service. I changed the values in the fusionauth section to
fusionauth:
  image: fusionauth/fusionauth-app:latest
  depends_on:
    - search
  environment:
    DATABASE_URL: jdbc:postgresql://mypostgre-rds-endpoint:5432/fusionauth
    DATABASE_ROOT_USER: root_user_account
    DATABASE_ROOT_PASSWORD: root_user_password
    DATABASE_USER: fusionauth
    DATABASE_PASSWORD: fusionauth_user_password
    FUSIONAUTH_MEMORY: ${FUSIONAUTH_MEMORY}
    FUSIONAUTH_SEARCH_SERVERS: http://search:9200
    FUSIONAUTH_URL: http://load-balancer-url:9011
  networks:
    - db
    - search
  restart: unless-stopped
  ports:
    - 9011:9011
  volumes:
    - fa_config:/usr/local/fusionauth/config
The first issue is that when I run this container, it goes into maintenance mode and tries to create the database, and I get a locale error. I don't need maintenance mode, I just need it to connect to the database, so I think I must have the database URL defined incorrectly.
The second problem is that I need to be able to do the same thing for search: create a container that runs under ECS/Fargate and is accessed through the load balancer.
I am no docker expert (yet). But I can't find any specific documentation to help me figure out how to configure and deploy the search and the app containers.
Any pointers to existing docs, or help getting this running, are appreciated.
I know I have to change the search section in the docker-compose file (posted entirely below), but I don't yet know what to change or how to build the container for search.
Entire docker-compose file as it stands right now.
version: '3'
services:
  search:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.1
    environment:
      - cluster.name=fusionauth
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=${ES_JAVA_OPTS}"
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - search
    restart: unless-stopped
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es_data:/usr/share/elasticsearch/data
  fusionauth:
    image: fusionauth/fusionauth-app:latest
    depends_on:
      - search
    environment:
      DATABASE_URL: jdbc:postgresql://mypostgre-rds-endpoint:5432/fusionauth
      DATABASE_ROOT_USER: root_user_account
      DATABASE_ROOT_PASSWORD: root_user_password
      DATABASE_USER: fusionauth
      DATABASE_PASSWORD: fusionauth_user_password
      FUSIONAUTH_MEMORY: ${FUSIONAUTH_MEMORY}
      FUSIONAUTH_SEARCH_SERVERS: http://search:9200
      FUSIONAUTH_URL: http://load-balancer-url:9011
    networks:
      - db
      - search
    restart: unless-stopped
    ports:
      - 9011:9011
    volumes:
      - fa_config:/usr/local/fusionauth/config
networks:
  db:
    driver: bridge
  search:
    driver: bridge
volumes:
  db_data:
  es_data:
  fa_config:
AFAICT there is no reason you should be using the DATABASE_ROOT_USER and DATABASE_ROOT_PASSWORD variables if the DB is already set up.
I would suggest starting by removing those; other than that, it looks pretty similar to a docker-compose setup I've been using for a while.
The only other thing I'd add is that this problem has nothing to do with ECS or Fargate at all as it sits; it's really just a docker-compose file you are having trouble getting running, from what I can tell.
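A minimal sketch of that change (assuming the fusionauth schema and user already exist in the RDS instance, so maintenance mode never needs the root account):

fusionauth:
  image: fusionauth/fusionauth-app:latest
  environment:
    DATABASE_URL: jdbc:postgresql://mypostgre-rds-endpoint:5432/fusionauth
    DATABASE_USER: fusionauth          # runtime user; root credentials removed
    DATABASE_PASSWORD: fusionauth_user_password
    FUSIONAUTH_SEARCH_SERVERS: http://search:9200
    FUSIONAUTH_URL: http://load-balancer-url:9011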

Setting up IPFS Cluster on docker environment

I am trying to set up a 2-node private IPFS cluster using Docker. For that purpose I am using the ipfs/ipfs-cluster:latest image.
My docker-compose file looks like:
version: '3'
services:
  peer-1:
    image: ipfs/ipfs-cluster:latest
    ports:
      - 8080:8080
      - 4001:4001
      - 5001:5001
    volumes:
      - ./cluster/peer1/config:/data/ipfs-cluster
  peer-2:
    image: ipfs/ipfs-cluster:latest
    ports:
      - 8081:8080
      - 4002:4001
      - 5002:5001
    volumes:
      - ./cluster/peer2/config:/data/ipfs-cluster
While starting the containers, I get the following error:
ERROR ipfshttp: error posting to IPFS: Post http://127.0.0.1:5001/api/v0/repo/stat?size-only=true: dial tcp 127.0.0.1:5001: connect: connection refused ipfshttp.go:745
Please help with the problem.
Is there any proper documentation about how to set up an IPFS cluster on Docker? This document misses a lot of details.
Thank you.
I figured out how to run a multi-node IPFS cluster in a docker environment.
The current ipfs/ipfs-cluster image, which is version 0.4.17, doesn't run an IPFS peer (ipfs/go-ipfs) inside it; we need to run that separately.
So, in order to run a multi-node (2-node in this case) IPFS cluster in a docker environment, we need to run 2 IPFS peer containers and 2 IPFS cluster containers, one corresponding to each peer.
So your docker-compose file will look as follows :
version: '3'
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
services:
  ipfs0:
    container_name: ipfs0
    image: ipfs/go-ipfs
    ports:
      - "4001:4001"
      - "5001:5001"
      - "8081:8080"
    volumes:
      - ./var/ipfs0-docker-data:/data/ipfs/
      - ./var/ipfs0-docker-staging:/export
    networks:
      vpcbr:
        ipv4_address: 10.5.0.5
  ipfs1:
    container_name: ipfs1
    image: ipfs/go-ipfs
    ports:
      - "4101:4001"
      - "5101:5001"
      - "8181:8080"
    volumes:
      - ./var/ipfs1-docker-data:/data/ipfs/
      - ./var/ipfs1-docker-staging:/export
    networks:
      vpcbr:
        ipv4_address: 10.5.0.7
  ipfs-cluster0:
    container_name: ipfs-cluster0
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      IPFS_API: /ip4/10.5.0.5/tcp/5001
    ports:
      - "9094:9094"
      - "9095:9095"
      - "9096:9096"
    volumes:
      - ./var/ipfs-cluster0:/data/ipfs-cluster/
    networks:
      vpcbr:
        ipv4_address: 10.5.0.6
  ipfs-cluster1:
    container_name: ipfs-cluster1
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs1
      - ipfs-cluster0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      IPFS_API: /ip4/10.5.0.7/tcp/5001
    ports:
      - "9194:9094"
      - "9195:9095"
      - "9196:9096"
    volumes:
      - ./var/ipfs-cluster1:/data/ipfs-cluster/
    networks:
      vpcbr:
        ipv4_address: 10.5.0.8
This will spin up a 2-peer IPFS cluster, and we can store and retrieve files using either peer.
The catch here is that we need to provide the IPFS_API to each ipfs-cluster container as an environment variable so that the cluster knows its corresponding peer, and both ipfs-cluster containers need the same CLUSTER_SECRET.
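If you need to generate a fresh CLUSTER_SECRET (the ipfs-cluster docs call for a 32-byte hex-encoded string like the one above), a shell one-liner such as this works:

od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n'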
According to the article you posted:
The container does not run go-ipfs. You should run the IPFS daemon separately, for example, using the ipfs/go-ipfs Docker container. We recommend mounting the /data/ipfs-cluster folder to provide a custom, working configuration, as well as persistency for the cluster data. This is usually achieved by passing -v <your-local-folder>:/data/ipfs-cluster to docker run.
If in fact you need to connect to another service within the docker-compose, you can simply refer to it by the service name, since hostname entries are created in all the containers in the docker-compose, so services can talk to each other by name instead of IP.
Additionally:
Unless you run docker with --net=host, you will need to set $IPFS_API
or make sure the configuration has the correct node_multiaddress.
The equivalent of --net=host in docker-compose is network_mode: "host" (incompatible with port-mapping) https://docs.docker.com/compose/compose-file/#network_mode
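Once the stack is up, a quick way to confirm the two cluster peers found each other (a hypothetical session, using the container names from the compose file above) is:

docker exec ipfs-cluster0 ipfs-cluster-ctl peers ls

It should list both cluster peers along with the go-ipfs daemons they are attached to.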
