Docker does not link virtual interfaces to virtual network (bridge) - docker

When I create and start all my instances everything looks fine, but even though routing is set up properly, my instances still cannot communicate with each other. I used this command for each instance:
ip link set <interface> master <network>
e.g.
ip link set vethb3735ba master br-bdf6dd295e3a
(the "#if14" suffix copied from the ip link output is a display-only peer index, not part of the interface name)
Here are my operating system details:
Linux arch 6.0.2-arch1-1 #1 SMP PREEMPT_DYNAMIC Sat, 15 Oct 2022 14:00:49 +0000 x86_64 GNU/Linux
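Before re-mastering interfaces by hand it is worth knowing that `ip link` prints veth names as name@ifN, and the suffix must be stripped before the name is usable in commands. A small sketch of that, using the interface and bridge names from the question (the Docker commands are shown as comments since they need a live daemon, and `docker network connect` is the supported way to attach a container to a network):

```shell
# `ip link` prints veth peers as <name>@if<peer-index>; the suffix is
# display-only, so strip it before using the name in other commands.
line='17: vethb3735ba@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br-bdf6dd295e3a'
iface=${line#*: }    # drop the leading "17: " index
iface=${iface%%@*}   # drop the "@if14" display suffix
iface=${iface%%:*}   # drop the trailing colon if no @ suffix was present
echo "$iface"        # -> vethb3735ba

# On a live host you would then check or fix bridge membership, e.g.:
#   ip link show master br-bdf6dd295e3a          # list current bridge members
#   ip link set "$iface" master br-bdf6dd295e3a  # manual attach (last resort)
#   docker network connect <network> <container> # the supported way
```

For reference, the compose file that creates these networks follows.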
services:
  postgres:
    container_name: postgres
    image: postgres
    environment:
      POSTGRES_USER: amigoscode
      POSTGRES_PASSWORD: password
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-pgadmin4@pgadmin.org}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
      PGADMIN_CONFIG_SERVER_MODE: 'False'
    volumes:
      - pgadmin:/var/lib/pgadmin
    ports:
      - "5050:80"
    networks:
      - postgres
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    ports:
      - "9411:9411"
    networks:
      - spring
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  rabbitmq:
    image: rabbitmq:3.9.11-management-alpine
    container_name: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      - spring
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  eureka-server:
    image: huseyinafsin/eureka-server:latest
    container_name: eureka-server
    ports:
      - "8761:8761"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    networks:
      - spring
    depends_on:
      - zipkin
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  apigw:
    image: huseyinafsin/apigw:latest
    container_name: apigw
    ports:
      - "8083:8083"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    networks:
      - spring
    depends_on:
      - zipkin
      - eureka-server
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  customer:
    image: huseyinafsin/customer:latest
    container_name: customer
    ports:
      - "8080:8080"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    networks:
      - spring
      - postgres
    depends_on:
      - zipkin
      - postgres
      - rabbitmq
      - eureka-server
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  fraud:
    image: huseyinafsin/fraud:latest
    container_name: fraud
    ports:
      - "8081:8081"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    networks:
      - spring
      - postgres
    depends_on:
      - zipkin
      - postgres
      - rabbitmq
      - eureka-server
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  notification:
    image: huseyinafsin/notification:latest
    container_name: notification
    ports:
      - "8082:8082"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    networks:
      - spring
      - postgres
    depends_on:
      - zipkin
      - postgres
      - rabbitmq
      - eureka-server
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
networks:
  postgres:
    driver: bridge
  spring:
    driver: bridge
volumes:
  postgres:
  pgadmin:

Related

docker swarm phpmyadmin can't login to mysql server

So I have deployed my stack application and everything is working as expected: three container replicas are running. Now I access phpMyAdmin and try to log in to MySQL, but I get the error: mysqli::real_connect(): php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution
Both the phpMyAdmin and MySQL containers are on the same network.
version: "3.9"
services:
  db:
    image: mysql
    #container_name: mysql_db
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    secrets:
      - mysql_root_password
      - mysql_database
      - mysql_user
      - mysql_password
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/mysql_root_password
      MYSQL_DATABASE_FILE: /run/secrets/mysql_database
      MYSQL_USER_FILE: /run/secrets/mysql_user
      MYSQL_PASSWORD_FILE: /run/secrets/mysql_password
    ports:
      - "9906:3306"
    networks:
      - back-tier
    volumes:
      - alpine-db_backup:/var/lib/mysql
      - alpine-mysql_logs:/var/log/mysql
      - alpine-mysql_cnf:/etc/mysql
    deploy:
      replicas: 3
      placement:
        constraints: [node.role == manager]
      resources:
        reservations:
          memory: 128M
        limits:
          memory: 256M
      restart_policy:
        condition: on-failure
        delay: 30s
        max_attempts: 10
        window: 60s
      update_config:
        parallelism: 1
        delay: 10s
        max_failure_ratio: 0.3
  phpmyadmin:
    image: phpmyadmin
    #container_name: phpmyadmin
    ports:
      - 8080:80
    environment:
      PMA_HOST: db
      PMA_PORT: 3306
      PMA_ARBITRARY: 1
    depends_on:
      - db
    networks:
      - back-tier
      - front-tier
    deploy:
      replicas: 2
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
        reservations:
          cpus: '0.25'
          memory: 20M
      restart_policy:
        condition: on-failure
        delay: 30s
        max_attempts: 10
networks:
  front-tier:
    driver: overlay
  back-tier:
    driver: overlay
For containers on the same network, to resolve another service's name you should use the service name without the stack name as a prefix. So your PMA_HOST should be db, not titanfxbmp_db.
version: "3.9"
services:
  db:
    image: mysql
    ...
  phpmyadmin:
    image: phpmyadmin
    ...
    environment:
      PMA_HOST: db
      PMA_PORT: 3306
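A quick way to confirm what the answer describes is to probe name resolution with getent, which exercises the same NSS resolver chain that produced the "php_network_getaddresses: getaddrinfo failed" error. The snippet below queries localhost so it runs anywhere; inside the stack you would run the same check in the phpMyAdmin container against the service name db (the container reference in the comment is hypothetical):

```shell
# getent asks the system resolver, the same path libmysqli's
# getaddrinfo call takes, so it is a good name-resolution probe.
getent hosts localhost

# Inside the running stack the equivalent check would be, e.g.:
#   docker exec <phpmyadmin-container> getent hosts db
# An empty result there means the service name is not resolvable on the
# container's networks (wrong network, or a stack-prefixed name was used).
```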

schema-registry in KAFKA not able to retrieve Cluster ID

I am trying to install a Kafka environment based on Confluent images. After "docker-compose up" all my containers are up and running, but after one minute schema-registry fails.
In the schema-registry log I found an error message explaining that it failed to get the Kafka cluster ID.
I checked the Kafka logs and found this:
"[2021-08-05 15:59:17,074] INFO Cluster ID = ddchQ8odQM-hF67TJO97Ng (kafka.server.KafkaServer)"
So the cluster ID is created. It seems that schema-registry is not able to retrieve the cluster ID, but I really don't understand what is happening here. I think it is a network issue; I tried many things to fix it, but without success.
Here is my docker-compose.yaml:
services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    hostname: zookeeper
    container_name: zookeeper
    # networks:
    #   - my-network
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    deploy:
      resources:
        limits:
          cpus: "1.00"
          memory: "1024M"
  kafka:
    image: confluentinc/cp-kafka
    container_name: kafka
    depends_on:
      - zookeeper
    # networks:
    #   - my-network
    ports:
      - 9092:9092
      - 30001:30001
    environment:
      # KAFKA_CREATE_TOPICS: toto
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100
      KAFKA_JMX_PORT: 30001
      KAFKA_JMX_HOSTNAME: kafka
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
    deploy:
      resources:
        limits:
          cpus: "1.00"
          memory: "2048M"
  kafka-jmx-exporter:
    build: ./materials/tools/prometheus-jmx-exporter
    container_name: jmx-exporter
    ports:
      - 8080:8080
    links:
      - kafka
    # networks:
    #   - my-network
    environment:
      JMX_PORT: 30001
      JMX_HOST: kafka
      HTTP_PORT: 8080
      JMX_EXPORTER_CONFIG_FILE: kafka.yml
    deploy:
      resources:
        limits:
          cpus: "1.00"
          memory: "1024M"
  prometheus:
    build: ./materials/tools/prometheus
    container_name: prometheus
    # networks:
    #   - my-network
    ports:
      - 9090:9090
  spark-master:
    container_name: spark-master
    build: ./materials/spark
    user: root
    # networks:
    #   - my-network
    volumes:
      - ./materials/spark/connectors:/connectors
      - ./materials/spark/scripts:/scripts/
      - ./materials/consumer:/scripts/consumer
      - ./secrets:/scripts/secrets
      - ./materials/spark/jars_dir:/opt/bitnami/spark/.ivy2:z
    ports:
      - 8085:8080
      - 7077:7077
      - 4040:4040
    environment:
      - INIT_DAEMON_STEP=setup_spark
      # - SPARK_MODE=master
      # - SPARK_RPC_AUTHENTICATION_ENABLED=no
      # - SPARK_RPC_ENCRYPTION_ENABLED=no
      # - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      # - SPARK_SSL_ENABLED=no
    deploy:
      resources:
        limits:
          cpus: "1.00"
          memory: "1024M"
  spark-worker-1:
    container_name: spark-worker-1
    build: ./materials/spark
    user: root
    # networks:
    #   - my-network
    depends_on:
      - spark-master
    ports:
      - 8083:8085
      - 4041:4040
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark-master:7077
      - SPARK_WORKER_MEMORY=1G
      - SPARK_WORKER_CORES=1
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
    deploy:
      resources:
        limits:
          cpus: "1.00"
          memory: "2048M"
        reservations:
          cpus: "1.00"
          memory: "1024M"
  schema-registry:
    image: confluentinc/cp-schema-registry
    hostname: schema-registry
    container_name: schema-registry
    #command: /bin/sh -c 'tail -f /dev/null'
    command: /bin/schema-registry-start /etc/schema-registry/schema-registry.properties
    depends_on:
      - kafka
    ports:
      - 8081:8081
    # networks:
    #   - my-network
    environment:
      # SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka:29092
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka-1:9092
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
      SCHEMA_REGISTRY_DEBUG: "true"
      SCHEMA_REGISTRY_KAFKASTORE.INIT.TIMEOUT.MS: 120000
    deploy:
      resources:
        limits:
          cpus: "1.00"
          memory: "2048M"
  producer:
    build: ./materials/producer
    container_name: producer
    depends_on:
      - kafka
    # networks:
    #   - my-network
    environment:
      KAFKA_BROKER_URL: kafka-1:9092
      TRANSACTIONS_PER_SECOND: 30
  kafkastream:
    build: ./materials/kafkastream
    container_name: kafkastream
    depends_on:
      - kafka
    # networks:
    #   - my-network
    environment:
      KAFKA_BROKER_URL: kafka-1:9092
      TRANSACTIONS_PER_SECOND: 5
  rest-proxy:
    image: confluentinc/cp-kafka-rest
    depends_on:
      - kafka
      - schema-registry
    # networks:
    #   - my-network
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    #command: /bin/kafka-rest-start
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: kafka:29092
      KAFKA_REST_LISTENERS: http://0.0.0.0:8082
      KAFKA_REST_SCHEMA_REGISTRY_URL: http://schema-registry:8081
# networks:
#   my-network:
#     external: false
My last try was to completely remove the networks from the docker-compose file, which is why all the lines related to networks are commented out here.
Any hint or idea will be appreciated.
Thanks
I finally found the solution. My mistake was to add the following line to my docker-compose.yml file: "command: /bin/schema-registry-start /etc/schema-registry/schema-registry.properties". That way, schema-registry starts by taking into account the default configuration in the schema-registry.properties file, which is of course not suitable for my local installation, and ignores all the environment parameters passed in the docker-compose.yaml file.
In PLAINTEXT_HOST://localhost:9092, change localhost to kafka-1, or use kafka:29092.
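To make that advice concrete: the broker above advertises PLAINTEXT://kafka:29092 for in-network clients, while several services bootstrap against kafka-1:9092, a hostname no service in this file defines (and port 9092 advertises localhost, which inside a container points back at the container itself). A hedged sketch of the consistent form, showing only the changed keys:

```yaml
# Hypothetical corrected fragment: point every in-network client at the
# listener the broker actually advertises for them (kafka:29092).
schema-registry:
  environment:
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka:29092
producer:
  environment:
    KAFKA_BROKER_URL: kafka:29092
kafkastream:
  environment:
    KAFKA_BROKER_URL: kafka:29092
```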

Set /etc/hostname in running container using docker-compose

My docker-compose.yml is as follows:
version: "3"
services:
  write:
    image: apachegeode/geode:1.11.0
    container_name: write
    hostname: a.b.net
    expose:
      - "8080"
      - "10334"
      - "40404"
      - "1099"
      - "7070"
    ports:
      - "10220:10334"
    volumes:
      - ./scripts/:/scripts/
    command: /scripts/sleep.sh gfsh start locator ...
    networks:
      my-network:
    deploy:
      replicas: 1
      resources:
        limits:
          cpus: '0.50'
          memory: 512M
        reservations:
          cpus: '0.50'
          memory: 512M
      restart_policy:
        condition: on-failure
    depends_on:
      - read
  read:
    image: apachegeode/geode:1.11.0
    container_name: read
    hostname: a.b.net
    expose:
      - "8080"
      - "10334"
      - "40404"
      - "1099"
      - "7070"
    ports:
      - "10221:10334"
    volumes:
      - ./scripts/:/scripts/
    command: /scripts/sleep.sh gfsh start locator ...
    networks:
      my-network:
    deploy:
      replicas: 1
      resources:
        limits:
          cpus: '0.50'
          memory: 512M
        reservations:
          cpus: '0.50'
          memory: 512M
      restart_policy:
        condition: on-failure
networks:
  my-network:
container_name has to be "write" and "read" since they are unique containers, both running on the same host machine. Setting hostname: a.b.net in docker-compose.yml adds the entry 192.168.160.2 a.b.net a to the /etc/hosts file, but /etc/hostname shows a, which is only the alias name. How can I set /etc/hostname to a.b.net using docker-compose.yml? I use
docker-compose -f my-docker-compose.yml up -d
to run the containers.

docker stack: Redis not working on worker node

I just completed the Docker documentation and created two instances on AWS (http://13.127.150.218, http://13.235.134.73). The first one is the manager and the second one is the worker. The following is the compose file I used to deploy:
version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "80:80"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
  redis:
    image: redis
    ports:
      - "6379:6379"
    volumes:
      - "/home/docker/data:/data"
    deploy:
      placement:
        constraints: [node.role == manager]
    command: redis-server --appendonly yes
    networks:
      - webnet
networks:
  webnet:
Here the redis service has a constraint that restricts it to run only on the manager node. Now my question is how the web service on the worker instance is supposed to use the redis service.
You need to set the hostname parameter on every container, so you can use this value to access services from the worker, or to access services on the manager from the worker.
version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    hostname: "web"
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "80:80"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    hostname: "visualizer"
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
  redis:
    image: redis
    hostname: "redis"
    ports:
      - "6379:6379"
    volumes:
      - "/home/docker/data:/data"
    deploy:
      placement:
        constraints: [node.role == manager]
    command: redis-server --appendonly yes
    networks:
      - webnet
networks:
  webnet:
Additionally, if you use Portainer instead of visualizer you can control your Swarm stack with more options:
https://hub.docker.com/r/portainer/portainer
BR,
Carlos
Consider the stack file from the question above.
Regardless of where a service is placed (manager or worker), all the services in the stack file, being on the same network, can use the embedded DNS functionality, which resolves each service by the service name defined.
In this case the service web makes use of the service redis by its service name.
Here is an example of the ping command being able to resolve the service web from within the container associated with the redis service (screenshot omitted).
Read more about Swarm Native Service Discovery to understand this.

Docker Compose 3 controlling resources (memory, cpu)

I'm trying to use the "resources" field from the Docker Compose version 3 documentation (https://docs.docker.com/compose/compose-file/); however, I'm facing this error:
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.fstore_java: 'resources'
How can I set the memory limit with docker-compose?
fstore_java:
  depends_on:
    - fstore_db
    - rabbit_broker
  build: ./fstore
  ports:
    - "8080:8080"
  expose:
    - "8080"
  links:
    - fstore_db
    - rabbit_broker
  restart: always
  resources:
    limits:
      cpus: '0.001'
      memory: 50M
It has to be nested under the "deploy" key:
fstore_java:
  depends_on:
    - fstore_db
    - rabbit_broker
  build: ./fstore
  ports:
    - "8080:8080"
  expose:
    - "8080"
  links:
    - fstore_db
    - rabbit_broker
  restart: always
  deploy:
    resources:
      limits:
        cpus: '0.001'
        memory: 50M
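Two follow-up details worth knowing, offered as general Compose behavior rather than anything stated in the thread: classic docker-compose ignores the deploy: section outside swarm mode unless run with --compatibility (which maps deploy.resources onto v2-style limits), and Docker reports the resulting limit in bytes, so 50M should surface as 52428800:

```shell
# Compose memory sizes are binary multiples: 50M -> 50 * 1024 * 1024 bytes.
limit_bytes=$((50 * 1024 * 1024))
echo "$limit_bytes"   # -> 52428800

# With classic docker-compose the deploy: section only takes effect via:
#   docker-compose --compatibility up -d
# and the applied limit can then be read back (container name assumed):
#   docker inspect -f '{{.HostConfig.Memory}}' fstore_java
```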
