How to connect the application container to the MySQL container - docker

I have an application service and a MySQL service, but I am not able to connect the two containers; it keeps returning this error:
django.db.utils.OperationalError: (2002, "Can't connect to MySQL server on '127.0.0.1' (115)")
I have included links in my application service, but nothing is working.
My MySQL container is up and running fine, and I can even log into it.
Here is a snapshot of the services:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cc26d09a81d1 gmasmatrix_worker:latest "/entrypoint.sh /sta…" 17 seconds ago Exited (1) 11 seconds ago gmasmatrix_celeryworker_1
749f23c37b16 gmasmatrix_application:latest "/entrypoint.sh /sta…" 18 seconds ago Exited (1) 9 seconds ago gmasmatrix_application_1
666029ad063a gmasmatrix_flower "/entrypoint.sh /sta…" 18 seconds ago Exited (1) 10 seconds ago gmasmatrix_flower_1
50ac0497e66b mysql:5.7.10 "/entrypoint.sh mysq…" 21 seconds ago Up 17 seconds 0.0.0.0:3306->3306/tcp gmasmatrix_db_1
669fbbe0a81d mailhog/mailhog:v1.0.0 "MailHog" 21 seconds ago Up 18 seconds 1025/tcp, 0.0.0.0:8025->8025/tcp gmasmatrix_mailhog_1
235a46c8d453 redis:5.0 "docker-entrypoint.s…" 21 seconds ago Up 17 seconds 6379/tcp gmasmatrix_redis_1
Docker-compose file
version: '2'
services:
  application: &application
    image: gmasmatrix_application:latest
    command: /start.sh
    volumes:
      - .:/app
    # env_file:
    #   - .env
    ports:
      - 8000:8000
    # cpu_shares: 874
    # mem_limit: 1610612736
    # mem_reservation: 1610612736
    build:
      context: ./
      dockerfile: ./compose/local/application/Dockerfile
      args:
        - GMAS_ENV_TYPE=local
    links:
      - "db"
  celeryworker:
    <<: *application
    image: gmasmatrix_worker:latest
    depends_on:
      - redis
      - mailhog
    ports: []
    command: /start-celeryworker
    links:
      - "db"
  flower:
    <<: *application
    image: gmasmatrix_flower
    ports:
      - "5555:5555"
    command: /start-flower
    links:
      - "db"
  mailhog:
    image: mailhog/mailhog:v1.0.0
    ports:
      - "8025:8025"
  redis:
    image: redis:5.0
  db:
    image: mysql:5.7.10
    environment:
      MYSQL_DATABASE: gmas_mkt
      MYSQL_ROOT_PASSWORD: pulkit1607
    ports:
      - "3306:3306"

Your application is trying to connect to 127.0.0.1, which inside Docker points to the app container itself.
Instead, you should connect to the db container. You can use Docker's built-in DNS service for this: in your application configuration, use db (the service name of the MySQL container) as the host instead of localhost or 127.0.0.1.
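For a Django app like the one in the traceback, a minimal sketch of that settings change might look like this. The database name and root password mirror the compose file above; connecting as root is an assumption for illustration only:

```python
# settings.py -- sketch only; values mirror the compose file's db service
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "gmas_mkt",        # MYSQL_DATABASE from the compose file
        "USER": "root",            # assumption: connecting as root
        "PASSWORD": "pulkit1607",  # MYSQL_ROOT_PASSWORD from the compose file
        "HOST": "db",              # the compose service name, NOT 127.0.0.1
        "PORT": "3306",
    }
}
```

The key line is HOST: Docker's embedded DNS resolves the service name db to the database container's IP on the shared network.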

Related

Kibana can't connect to Elasticsearch with Docker on Mac M1

Basically I'm trying to set up an environment with Elasticsearch and Kibana with Docker on an M1 Mac. I've set the env variable DOCKER_DEFAULT_PLATFORM to linux/amd64. Everything seems fine when running the containers, but when I try to connect Kibana to Elasticsearch they just can't see each other. This is my current docker-compose file:
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:8.3.3-amd64
  environment:
    - discovery.type=single-node
    - node.name=elasticsearch1
    - cluster.name=docker-cluster
    - cluster.initial_master_nodes=elasticsearch1
    - bootstrap.memory_lock=true
    - "ES_JAVA_OPTS=-Xms128M -Xmx128M"
  ports:
    - 9200:9200
  networks:
    - my-network
kibana:
  image: docker.elastic.co/kibana/kibana:8.3.3-amd64
  environment:
    SERVER_NAME: localhost
    ELASTICSEARCH_URL: http://localhost:9200/
  ports:
    - 5601:5601
  depends_on:
    - elasticsearch
  networks:
    - my-network
Before that I was using links instead of networks, with no luck either. From my terminal or browser I can see both Elasticsearch and Kibana running on their respective ports. I'm out of ideas here; I appreciate any help!
EDIT
docker ps output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
265023669cfd docker.elastic.co/kibana/kibana:8.3.3-amd64 "/bin/tini -- /usr/l…" 14 minutes ago Up 14 minutes 0.0.0.0:5601->5601/tcp folha3_kibana_1
48ee37663dda docker.elastic.co/elasticsearch/elasticsearch:8.3.3-amd64 "/bin/tini -- /usr/l…" 14 minutes ago Up 14 minutes 0.0.0.0:9200->9200/tcp, 9300/tcp folha3_elasticsearch_1
6b1f6dd9473f redis "docker-entrypoint.s…" 14 minutes ago Up 14 minutes 0.0.0.0:6379->6379/tcp folha3_redis_1
2a3ade65634a mysql:5.7 "docker-entrypoint.s…" 14 minutes ago Up 14 minutes 0.0.0.0:3306->3306/tcp, 33060/tcp folha3_mysql_1
ELASTICSEARCH_URL: http://localhost:9200/ should point at the Elasticsearch service name instead, i.e. http://elasticsearch:9200/; localhost inside the Kibana container refers to the Kibana container itself and cannot reach the Elasticsearch container.
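A corrected kibana service fragment might look like the sketch below. Note that Kibana 7.x/8.x Docker images read ELASTICSEARCH_HOSTS; ELASTICSEARCH_URL was the 6.x-era variable name:

```yaml
kibana:
  image: docker.elastic.co/kibana/kibana:8.3.3-amd64
  environment:
    SERVER_NAME: localhost
    # Point Kibana at the service name, which Docker's embedded DNS
    # resolves on the shared my-network.
    ELASTICSEARCH_HOSTS: http://elasticsearch:9200
  ports:
    - 5601:5601
  depends_on:
    - elasticsearch
  networks:
    - my-network
```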

Docker compose will not pull all images from compose file

I'm using Docker with a docker-compose.yml file.
I put two different services in it, which I'd like to update.
I also run Portainer and added some other services through it:
pi@raspberrypi:~/docker $ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ec830e789d38 nodered/node-red:latest "npm --no-update-not…" 8 days ago Up 6 minutes (healthy) 0.0.0.0:1880->1880/tcp, :::1880->1880/tcp docker_node-red_1
15aa942b2b94 openhab/openhab:3.1.1 "/entrypoint gosu op…" 8 days ago Up 8 days (healthy) docker_openhab_1
e805e3f527c4 portainer/portainer-ce "/portainer" 8 days ago Up 8 days 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 0.0.0.0:9443->9443/tcp, :::9443->9443/tcp portainer
80990d1ad7e7 influxdb:latest "/entrypoint.sh infl…" 9 months ago Up 8 days InfluxDB
My actual docker-compose.yml file looks like this:
pi@raspberrypi:~/docker $ cat docker-compose.yml
version: "2"
services:
  openhab:
    image: "openhab/openhab:3.1.1"
    restart: always
    network_mode: host
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "./openhab_addons:/openhab/addons"
      - "./openhab_conf:/openhab/conf"
      - "./openhab_userdata:/openhab/userdata"
    environment:
      USER_ID: "1000"
      GROUP_ID: "1000"
      OPENHAB_HTTP_PORT: "8080"
      OPENHAB_HTTPS_PORT: "8443"
      EXTRA_JAVA_OPTS: "-Duser.timezone=Europe/Berlin"
services:
  node-red:
    image: nodered/node-red:latest
    environment:
      - TZ=Europe/Amsterdam
    ports:
      - "1880:1880"
    networks:
      - node-red-net
    volumes:
      - node-red-data:/data
    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0"
volumes:
  node-red-data:
networks:
  node-red-net:
In order to update the openhab container from 3.1.1 to 3.2.0, I changed the image name inside the compose file to openhab/openhab:3.2.0.
Afterwards I ran docker-compose pull, but it only checked whether a new image was available for node-red, not for openhab.
What is wrong?
You need to put all the services under a single services key. That's also why it's plural.
services:
openhab:
...
node-red:
...
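Merged under that single key, the compose file from the question would look roughly like this (volumes and networks stay at the top level; the second services: key previously shadowed the first, which is why openhab was never pulled):

```yaml
version: "2"
services:
  openhab:
    image: "openhab/openhab:3.2.0"
    restart: always
    network_mode: host
    # ... volumes and environment as in the original file ...
  node-red:
    image: nodered/node-red:latest
    environment:
      - TZ=Europe/Amsterdam
    ports:
      - "1880:1880"
    networks:
      - node-red-net
    volumes:
      - node-red-data:/data
    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0"
volumes:
  node-red-data:
networks:
  node-red-net:
```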

Docker - Symfony5/Mercure : Impossible to reach mercure hub

I am trying, without success, to reach Mercure's hub through my browser at this URL:
http://localhost:3000 => ERR_CONNECTION_REFUSED
I use Docker for development. Here's my docker-compose.yml:
# docker/docker-compose.yml
version: '3'
services:
  database:
    container_name: test_db
    build:
      context: ./database
    environment:
      - MYSQL_DATABASE=${DATABASE_NAME}
      - MYSQL_USER=${DATABASE_USER}
      - MYSQL_PASSWORD=${DATABASE_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
    ports:
      - "3309:3306"
    volumes:
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
      - ./database/data:/var/lib/mysql
  php-fpm:
    container_name: test_php
    build:
      context: ./php-fpm
    depends_on:
      - database
    environment:
      - APP_ENV=${APP_ENV}
      - APP_SECRET=${APP_SECRET}
      - DATABASE_URL=mysql://${DATABASE_USER}:${DATABASE_PASSWORD}@database:3306/${DATABASE_NAME}?serverVersion=5.7
    volumes:
      - ./src:/var/www
  nginx:
    container_name: test_nginx
    build:
      context: ./nginx
    volumes:
      - ./src:/var/www
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/sites/:/etc/nginx/sites-available
      - ./nginx/conf.d/:/etc/nginx/conf.d
      - ./logs:/var/log
    depends_on:
      - php-fpm
    ports:
      - "8095:80"
  caddy:
    container_name: test_mercure
    image: dunglas/mercure
    restart: unless-stopped
    environment:
      MERCURE_PUBLISHER_JWT_KEY: '!ChangeMe!'
      MERCURE_SUBSCRIBER_JWT_KEY: '!ChangeMe!'
      PUBLISH_URL: '${MERCURE_PUBLISH_URL}'
      JWT_KEY: '${MERCURE_JWT_KEY}'
      ALLOW_ANONYMOUS: '${MERCURE_ALLOW_ANONYMOUS}'
      CORS_ALLOWED_ORIGINS: '${MERCURE_CORS_ALLOWED_ORIGINS}'
      PUBLISH_ALLOWED_ORIGINS: '${MERCURE_PUBLISH_ALLOWED_ORIGINS}'
    ports:
      - "3000:80"
I have successfully executed docker-compose up -d.
docker ps -a:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0e4a72fe75b2 dunglas/mercure "caddy run --config …" 2 hours ago Up 2 hours 443/tcp, 2019/tcp, 0.0.0.0:3000->80/tcp, :::3000->80/tcp test_mercure
724fe920ebef nginx "/docker-entrypoint.…" 3 hours ago Up 3 hours 0.0.0.0:8095->80/tcp, :::8095->80/tcp test_nginx
9e63fddf50ef php-fpm "docker-php-entrypoi…" 3 hours ago Up 3 hours 9000/tcp test_php
e7989b26084e database "docker-entrypoint.s…" 3 hours ago Up 3 hours 0.0.0.0:3309->3306/tcp, :::3309->3306/tcp test_db
I can reach http://localhost:8095 to access my Symfony app, but I don't know at which URL I am supposed to reach my Mercure hub.
Thanks for your help!
I tried for months to get Symfony + nginx + MySQL + phpMyAdmin + Mercure + Docker working both locally for development and in production (obviously), to no avail.
While this isn't directly answering your question, the only way I can contribute is with an "answer", as I don't have enough reputation to comment, or I would have done that.
If you're not tied to nginx for any reason besides needing a web server, and can replace it with Caddy, I have a repo with Symfony + Caddy + MySQL + phpMyAdmin + Mercure + Docker that works with SSL both locally and in production.
https://github.com/thund3rb1rd78/symfony-mercure-website-skeleton-dockerized
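As an aside on the actual URL: the Mercure hub serves its endpoint under the /.well-known/mercure path, so with the port mapping from the compose file above, a Symfony app would typically be configured along these lines. This is a sketch; the variable names are the ones used by the Symfony Mercure bundle, and caddy is the compose service name from the question:

```
# .env -- sketch, assuming the compose file above
# URL the PHP container uses to publish (container-to-container, via service name):
MERCURE_URL=http://caddy/.well-known/mercure
# URL the browser subscribes to (the host-mapped port 3000):
MERCURE_PUBLIC_URL=http://localhost:3000/.well-known/mercure
MERCURE_JWT_SECRET='!ChangeMe!'
```

So in the browser, the hub would be reached at http://localhost:3000/.well-known/mercure rather than at the bare http://localhost:3000.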

Docker http://localhost:8000 it is not working

I am new to Docker and tried to install WordPress in Docker.
Details:
OS: Windows 10 Home
Docker version: 19.03.1, build 74b1e89e8a
docker-compose version: 1.24.1, build 4667896b
I did the following steps:
version: '3.3'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: mysql
      MYSQL_USER: admin
      MYSQL_PASSWORD: admin
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: admin
      WORDPRESS_DB_PASSWORD: admin
      WORDPRESS_DB_NAME: admin
volumes:
  db_data: {}
It is working fine
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
944e0ea8ff69 wordpress:latest "docker-entrypoint.s…" 34 minutes ago Up 34 minutes 0.0.0.0:8000->80/tcp wordpress_wordpress_1
5a3890fed7fe redis "docker-entrypoint.s…" 45 minutes ago Up 45 minutes 6379/tcp sleepy_matsumoto
4edf3f9fc944 mysql:5.7 "docker-entrypoint.s…" About an hour ago Up 38 minutes 3306/tcp, 33060/tcp wordpress_db_1
But when I open http://localhost:8000 or http://localhost:80 in a browser, it says the site is not reachable. How can I resolve this?
I found the answer to this problem. The details are as follows: I had installed Docker using Docker Toolbox, so the containers run inside a VirtualBox VM and are published on the VM's IP rather than on localhost.
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * virtualbox Running tcp://192.168.99.100:2376 v19.03.5
Now if I open:
http://192.168.99.100:8000/
it finally works. Hopefully it will help you too.

Cannot connect to Kafka with Conduktor?

I installed Kafka on an Ubuntu 18.04 VM with the following compose file:
version: '2'
networks:
  kafka-net:
    driver: bridge
services:
  zookeeper-server:
    image: 'bitnami/zookeeper:latest'
    networks:
      - kafka-net
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka-server1:
    image: 'bitnami/kafka:latest'
    networks:
      - kafka-net
    ports:
      - '9092:9092'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper-server
  kafka-server2:
    image: 'bitnami/kafka:latest'
    networks:
      - kafka-net
    ports:
      - '9093:9092'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9093
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper-server
It installed without any problem.
sudo docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
39f38caf57cb bitnami/kafka:latest "/entrypoint.sh /run…" 3 hours ago Up 5 minutes 0.0.0.0:9092->9092/tcp kafka_kafka-server1_1
088a703b5b76 bitnami/kafka:latest "/entrypoint.sh /run…" 3 hours ago Up 3 hours 0.0.0.0:9093->9092/tcp kafka_kafka-server2_1
6a754bda47ea bitnami/zookeeper:latest "/entrypoint.sh /run…" 3 hours ago Up 3 hours 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, 8080/tcp kafka_zookeeper-server_1
Now I want to connect to Kafka on my VM with the following settings.
I test it from the VM itself with:
root@ubuntu:~# kafkacat -b 192.168.179.133:9092 -L
Metadata for all topics (from broker -1: 192.168.179.133:9092/bootstrap):
 1 brokers:
  broker 1001 at localhost:9092
 0 topics:
But from my Windows 10 machine I cannot connect to 192.168.179.133:9092 with Conduktor; it returns an error.
The ZK test is OK, but the Kafka connectivity test raises the error!
You should change KAFKA_CFG_ADVERTISED_LISTENERS whenever Conduktor is not installed on the same machine as the Kafka cluster. (Your kafkacat output confirms the problem: the broker advertises itself as localhost:9092.)
It should be like this for kafka-server1:
KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.179.133:9092
and for kafka-server2:
KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.179.133:9093
Note: you should consider adding both Kafka servers in Conduktor for redundancy.
You can check this for more information.
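Put together for kafka-server2, the environment section from the compose file above would become something like the sketch below. The VM IP is the one from the question; the listener still binds the container's internal port 9092, which compose maps to host port 9093, so the advertised port intentionally differs from the bound port:

```yaml
kafka-server2:
  image: 'bitnami/kafka:latest'
  networks:
    - kafka-net
  ports:
    - '9093:9092'
  environment:
    - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
    # Bind inside the container on 9092 (compose maps it to host 9093)...
    - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
    # ...but advertise the address external clients like Conduktor must use:
    - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.179.133:9093
    - ALLOW_PLAINTEXT_LISTENER=yes
  depends_on:
    - zookeeper-server
```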
