Debugging a docker compose mariadb healthcheck

I'm trying to add a healthcheck to a mariadb service, but it then prevents the client from connecting to the server, showing the message:
ERROR 2005 (HY000): Unknown MySQL server host 'mysql' (0)
The connection can be established and everything runs fine without the following healthcheck:
healthcheck:
  test: /usr/bin/learnintouch/db-health-check.sh
  interval: 30s
  timeout: 10s
  retries: 3
... with the healthcheck behaving as expected:
root@428b1e6c7c1a:/usr/bin/learnintouch/www/folkuniversitet# /usr/bin/learnintouch/db-health-check.sh
+ /usr/bin/mysql/install/bin/mysql --protocol=tcp -h mysql -u root -pxxxxxx -e 'show databases;'
Warning: Using a password on the command line interface can be insecure.
+--------------------+
| Database |
+--------------------+
| db_engine |
| db_learnintouch |
| information_schema |
| mysql |
| performance_schema |
| test |
+--------------------+
The initial working service configuration is:
mysql:
  image: localhost:5000/mariadb:10.1.24
  environment:
    - MYSQL_ROOT_PASSWORD=xxxxxx
  networks:
    - learnintouch
  volumes:
    - "~/dev/docker/projects/learnintouch/volumes/database/data:/usr/bin/mariadb/install/data"
    - "~/dev/docker/projects/learnintouch/volumes/logs:/usr/bin/mariadb/install/logs"
  deploy:
    replicas: 1
    restart_policy:
      condition: any
      delay: 5s
      max_attempts: 3
      window: 10s
The failing service configuration is:
mysql:
  image: localhost:5000/mariadb:10.1.24
  environment:
    - MYSQL_ROOT_PASSWORD=xxxxxx
  networks:
    - learnintouch
  volumes:
    - "~/dev/docker/projects/learnintouch/volumes/database/data:/usr/bin/mariadb/install/data"
    - "~/dev/docker/projects/learnintouch/volumes/logs:/usr/bin/mariadb/install/logs"
  deploy:
    replicas: 1
    restart_policy:
      condition: any
      delay: 5s
      max_attempts: 3
      window: 10s
  healthcheck:
    test: /usr/bin/learnintouch/db-health-check.sh
    interval: 30s
    timeout: 10s
    retries: 3
At server startup, the mariadb log is exactly the same, with or without the healthcheck, and ends with:
2017-12-29 10:27:48 139873749194560 [Note] /usr/bin/mariadb/install/bin/mysqld: ready for connections.
Version: '10.1.24-MariaDB' socket: '/usr/bin/mariadb/install/tmp/mariadb.sock' port: 3306 Source distribution
Here is the db-health-check.sh bash script:
#!/bin/bash -x
/usr/bin/mysql/install/bin/mysql --protocol=tcp -h mysql -u root -pxxxxxx -e "show databases;" || exit 1
I'm on Docker version:
Client:
Version: 17.05.0-ce
API version: 1.29
Go version: go1.7.5
Git commit: 89658be
Built: Thu May 4 22:10:54 2017
OS/Arch: linux/amd64
Server:
Version: 17.05.0-ce
API version: 1.29 (minimum version 1.12)
Go version: go1.7.5
Git commit: 89658be
Built: Thu May 4 22:10:54 2017
OS/Arch: linux/amd64
Experimental: false
UPDATE:
There were in fact a few issues, which together made the whole thing puzzling.
The first one was that the db-health-check.sh file was sitting in the client application container, and thus could not be found by the database service container when the latter needed to perform its healthcheck.
The second one was that the db-health-check.sh file used a wrong path for its mysql client; the correct path is /usr/bin/mariadb/install/bin/mysql.
The third one was that the db-health-check.sh file used a wrong hostname in its connection, wrongly using mysql instead of localhost.
The fourth one was that the root user could not connect, getting ERROR 1045 (28000): Access denied for user 'root'@'localhost', as it should not use any password, as in /usr/bin/mariadb/install/bin/mysql --protocol=tcp -h localhost -u root -e "show databases;" || exit 1.
After correcting these points, the application can start and run fine even with the healthcheck in place.
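Putting those corrections together, a fixed db-health-check.sh (which needs to live inside the database service container) looks roughly like this:

#!/bin/bash -x
# Use the mariadb client path inside the database container, connect to the
# local server, and omit the root password, as described above.
/usr/bin/mariadb/install/bin/mysql --protocol=tcp -h localhost -u root -e "show databases;" || exit 1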
The one thing that blurred the lines even further was that the service is not available until the first healthcheck has passed successfully. Indeed I configured a healthcheck as:
healthcheck:
  test: exit 0
  interval: 60s
  timeout: 10s
  retries: 3
with the container state showing this at first:
"Health": {
"Status": "starting",
"FailingStreak": 0,
"Log": []
}
and the application was still not able to connect to the service until the first interval of 60s had expired and the healthcheck had succeeded.
I should therefore use a short interval so that the application does not have to wait too long for the service to become available.
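For example, a healthcheck along these lines (values are illustrative, not tested here) would mark the service healthy much sooner:

healthcheck:
  test: /usr/bin/learnintouch/db-health-check.sh
  interval: 5s     # short interval so the first successful check happens quickly
  timeout: 10s
  retries: 3
  # start_period: 30s  # grace period before failures count, if your Docker/compose file version supports it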

Related

Running ELK on docker, Kibana says: Unable to retrieve version information from Elasticsearch nodes

I was referring to the example given in the Elasticsearch documentation for starting the Elastic Stack (Elasticsearch and Kibana) on Docker using docker compose. It gives an example of a docker compose version 2.2 file, so I tried to convert it to a docker compose version 3.8 file. It also creates three Elasticsearch nodes and has security enabled. I want to keep it minimal to start with, so I tried to turn off security and also reduce the number of Elasticsearch nodes to 2. This is how my current compose file looks:
version: "3.8"
services:
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:8.0.0-amd64
volumes:
- esdata01:/usr/share/elasticsearch/data
ports:
- 9200:9200
environment:
- node.name=es01
- cluster.name=docker-cluster
- cluster.initial_master_nodes=es01
- bootstrap.memory_lock=true
- xpack.security.enabled=false
deploy:
resources:
limits:
memory: 1g
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
# [
# "CMD-SHELL",
# # "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
# ]
# Changed to:
test: ["CMD-SHELL", "curl -f http://localhost:9200 || exit 1"]
interval: 10s
timeout: 10s
retries: 120
kibana:
depends_on:
- es01
image: docker.elastic.co/kibana/kibana:8.0.0-amd64
volumes:
- kibanadata:/usr/share/kibana/data
ports:
- 5601:5601
environment:
- SERVERNAME=kibana
- ELASTICSEARCH_HOSTS=https://localhost:9200
deploy:
resources:
limits:
memory: 1g
healthcheck:
test:
[
"CMD-SHELL",
"curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
retries: 120
volumes:
esdata01:
driver: local
kibanadata:
driver: local
Then, I tried to run it:
docker stack deploy -c docker-compose.nosec.noenv.yml elk
Creating network elk_default
Creating service elk_es01
Creating service elk_kibana
When I tried to check their status, it displayed the following:
$ docker container list
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3dcd08134e38 docker.elastic.co/kibana/kibana:8.0.0-amd64 "/bin/tini -- /usr/l…" 3 minutes ago Up 3 minutes (health: starting) 5601/tcp elk_kibana.1.ng8aspz9krfnejfpsnqzl2sci
7b548a43c45c docker.elastic.co/elasticsearch/elasticsearch:8.0.0-amd64 "/bin/tini -- /usr/l…" 3 minutes ago Up 3 minutes (healthy) 9200/tcp, 9300/tcp elk_es01.1.d9a107j6wkz42shti3n6kpfmx
I noticed that Kibana's status gets stuck at (health: starting). When I checked Kibana's logs with the command docker service logs -f elk_kibana, they had the following WARN and ERROR lines:
[WARN ][plugins.security.config] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
[WARN ][plugins.security.config] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
[WARN ][plugins.reporting.config] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[WARN ][plugins.reporting.config] Found 'server.host: "0.0.0.0"' in Kibana configuration. Reporting is not able to use this as the Kibana server hostname. To enable PNG/PDF Reporting to work, 'xpack.reporting.kibanaServer.hostname: localhost' is automatically set in the configuration. You can prevent this message by adding 'xpack.reporting.kibanaServer.hostname: localhost' in kibana.yml.
[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED 127.0.0.1:9200
It seems that Kibana is not able to connect to Elasticsearch, but why? Is it because security is disabled, and we cannot have security disabled?
PS-1: Earlier, when I set the Elasticsearch host as follows in Kibana's environment in the docker compose file:
ELASTICSEARCH_HOSTS=https://es01:9200 # that is 'es01' instead of `localhost`
it gave me the following error:
[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. getaddrinfo ENOTFOUND es01
So, after checking this question, I changed es01 to localhost as specified earlier (that is, in the complete docker compose file content before PS-1).
PS-2: Replacing localhost with 192.168.0.104 gives the following errors:
[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED 192.168.0.104:9200
[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. write EPROTO 140274197346240:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:
Try this:
ELASTICSEARCH_HOSTS=http://es01:9200
I don't know why it runs on my PC, since Elasticsearch is supposed to use SSL. But in your case using http works just fine.
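Applied to the compose file in the question, only the ELASTICSEARCH_HOSTS line of the kibana service changes; roughly:

    environment:
      - SERVERNAME=kibana
      # plain HTTP against the es01 service name, since xpack security is disabled
      - ELASTICSEARCH_HOSTS=http://es01:9200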

Browser request hanging when running the latest Docker ColdFusion image

I installed the Docker release of ColdFusion with the following command:
docker pull eaps-docker-coldfusion.bintray.io/cf/coldfusion:latest
I then created a compose file:
version: "3.7"
services:
coldfusion:
image: eaps-docker-coldfusion.bintray.io/cf/coldfusion:latest
ports:
- "8500:8500"
networks:
coldfusion:
hostname: coldfusion
volumes:
- "~/dev/docker/projects/coldfusion/volumes/app:/app"
- "~/dev/docker/projects/coldfusion/volumes/logs:/opt/coldfusion/cfusion/logs"
environment:
acceptEULA: "YES"
password: "ColdFusion123"
enableSecureProfile: "false"
HOST_USER_ID: ${CURRENT_UID}
HOST_GROUP_ID: ${CURRENT_GID}
deploy:
replicas: 1
restart_policy:
condition: any
delay: 5s
max_attempts: 3
window: 10s
healthcheck:
test: curl --fail http://localhost:8500 || exit 1
interval: 1m
timeout: 3s
retries: 3
networks:
coldfusion:
name: coldfusion
common:
external: true
name: common
and started it with the command:
docker stack deploy --compose-file docker-compose-dev.yml coldfusion
The log shows:
stephane@stephane-pc:~/dev/docker/projects/coldfusion$ docker service logs -f coldfusion_coldfusion
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | Updating webroot to /app
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | Configuring virtual directories
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | Updating password
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | Skipping language updation
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | Serial Key: Not Provided
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | Previous Serial Key: Not Provided
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | Starting ColdFusion
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | Starting ColdFusion 2018 server ...
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | The ColdFusion 2018 server is starting up and will be available shortly.
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | ======================================================================
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | ColdFusion 2018 server has been started.
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | ColdFusion 2018 will write logs to /opt/coldfusion/cfusion/bin/../logs/coldfusion-out.log
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | ======================================================================
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | [000] Checking server startup status...
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | % Total % Received % Xferd Average Speed Time Time Time Current
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | External Addons: Disabled
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | External Session Storage: Disabled
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | Skipping setup script invocation
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | Secure Profile: Disabled
coldfusion_coldfusion.1.rlhixv3jctvm@stephane-pc | Cleaning up setup directories
But it hangs when I request http://localhost:8500/ in the browser.
The log remains empty:
tail -f volumes/logs/coldfusion-out.log
I created an index.cfm page in the /app directory:
hi
<cfset firstName = "World">
Hello <cfoutput>#firstName#</cfoutput>!
This CFML tutorial was designed for
<cfif firstName eq "World">
you!
<cfelse>
the world to see.
</cfif>
UPDATE: A 200 response comes back fine when using 127.0.0.1 instead of localhost
Opening the firewall ports does not change anything about the issue:
stephane@stephane-pc:~$ sudo ufw allow from 127.0.0.0 to any port 8500;
Rules updated
stephane@stephane-pc:~$ sudo ufw allow from any to any port 8500;
Rules updated
Rules updated (v6)
My host /etc/hosts file contains the line:
127.0.0.1 localhost
The nmap command responds:
stephane@stephane-pc:~/dev/docker/projects/coldfusion$ nmap -p 8500 localhost
Starting Nmap 7.80 ( https://nmap.org ) at 2020-09-08 12:09 CEST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00038s latency).
PORT STATE SERVICE
8500/tcp open fmtp
Nmap done: 1 IP address (1 host up) scanned in 0.22 seconds
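A quick way to compare the two addresses from the host (a sketch; the behaviour in the comments is the one described in the UPDATE above):

curl -I http://127.0.0.1:8500   # responds, per the UPDATE
curl -I http://localhost:8500   # hangs in this setup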
Your docker compose file shows
hostname: coldfusion
so shouldn't it be available at http://coldfusion:8500?
If it's docker compose v3, it should be
services:
  dns:
    hostname: 'your-domain'

Port 4466 already in use error after migrating from GraphQL Yoga to Apollo Server 2

I have a local app that had a backend of Prisma and GraphQL Yoga. I migrated from Yoga to Apollo Server 2 and believe I have the configuration set up correctly. However, when I go to 'run dev' I am getting an error that port 4466 is already in use.
I thought perhaps I needed to restart my docker images and did try that.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f14c004ae0d2 prismagraphql/prisma:1.34 "/bin/sh -c /app/sta…" 30 minutes ago Up 30 minutes 0.0.0.0:4466->4466/tcp backend_prisma_1
0c5f3517e990 mysql "docker-entrypoint.s…" 5 months ago Up 21 minutes 3306/tcp, 33060/tcp latinconexiones_mysql-db_1
This is my docker-compose.yml file
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.34
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        # managementApiSecret: my-secret
        databases:
          default:
            connector: mysql
            host: host.docker.internal
            database: test_db
            user: root
            password: root
            rawAccess: true
            port: '8889'
            migrations: false
How can I solve this? It feels like re-initializing Prisma with a different port might work, but that seems like overkill.
Check with docker ps whether any container uses that port; if so, stop it if you don't need it, or change the port of your current container.
It may also be that a non-containerized app uses that port; check this with: sudo lsof -i -P -n | grep LISTEN | grep 4466
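Putting that advice together, a minimal check-and-stop sequence might look like this (the container name comes from the docker ps output above):

# list running containers and look for the one publishing port 4466
docker ps | grep 4466
# stop it if it is not needed
docker stop backend_prisma_1
# check whether a non-containerized process is listening on 4466
sudo lsof -i -P -n | grep LISTEN | grep 4466
# alternatively, publish prisma on a different host port in docker-compose.yml, e.g. "4467:4466"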

Setting up Docker for a Laravel app, I got errors in "compose/cli/main.py"

On my Kubuntu 18.04 machine I am trying to run Docker for my Laravel application:
$ docker --version
Docker version 17.12.1-ce, build 7390fc6
I have 3 files:
.env:
# PATHS
DB_PATH_HOST=./databases
APP_PATH_HOST=./votes
APP_PTH_CONTAINER=/var/www/html/
docker-compose.yml:
version: '3'
services:
  web:
    build: ./web/Dockerfile.yml
    environment:
      - APACHE_RUN_USER=www-data
    volumes:
      - ${DB_PATH_HOST}:${APP_PTH_CONTAINER}
    ports:
      - 8080:80
    working_dir: ${APP_PTH_CONTAINER}
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 1
    volumes:
      - ${DB_PATH_HOST}:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
  composer:
    image: composer:1.6
    volumes:
      - ${DB_PATH_HOST}:${APP_PTH_CONTAINER}
    working_dir: ${APP_PTH_CONTAINER}
    command: composer install
/web/Dockerfile.yml:
FROM php:7.2-apache
RUN docker-php-ext-install \
    pdo_mysql \
    && a2enmod \
    rewrite
When I try to use docker-compose up --build, I get the following:
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker-compose up --build
Building web
Traceback (most recent call last):
File "bin/docker-compose", line 6, in <module>
File "compose/cli/main.py", line 71, in main
File "compose/cli/main.py", line 127, in perform_command
File "compose/cli/main.py", line 1052, in up
File "compose/cli/main.py", line 1048, in up
File "compose/project.py", line 466, in up
File "compose/service.py", line 329, in ensure_image_exists
File "compose/service.py", line 1047, in build
File "site-packages/docker/api/build.py", line 142, in build
TypeError: You must specify a directory to build in path
[6769] Failed to execute script docker-compose
I know that *.py files are Python source files, but I do not use or work with Python; I work with PHP.
Why is there an error, and how do I fix it?
MODIFIED:
$ docker-compose up --build
Building webapp
Step 1/2 : FROM php:7.2-apache
---> a7d68dad7584
Step 2/2 : RUN docker-php-ext-install pdo_mysql && a2enmod rewrite
---> Using cache
---> 519d1b33af81
Successfully built 519d1b33af81
Successfully tagged votes_docker_webapp:latest
Starting votes_docker_adminer_1 ...
Starting votes_docker_composer_1 ...
Starting votes_docker_adminer_1 ... error
votes_docker_db_1 is up-to-date
ERROR: for votes_docker_adminer_1 Cannot start service adminer: driver failed programming external connectivity on endpoint votes_docker_adminer_1 (6e94693ab8b1a990aaa83164df0952e8665f351618a72aStarting votes_docker_composer_1 ... done
ERROR: for adminer Cannot start service adminer: driver failed programming external connectivity on endpoint votes_docker_adminer_1 (6e94693ab8b1a990aaa83164df0952e8665f351618a72a5531f9c3ccc18a2e3d): Bind for 0.0.0.0:8080 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.
I tried to check related ports and got:
# sudo netstat -ntpl | grep 8080:8080
# sudo netstat -ntpl | grep 0.0.0.0:8080
# sudo netstat -ntpl | grep 8080
tcp6 0 0 :::8080 :::* LISTEN 7361/docker-proxy
MODIFIED #2:
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker-compose up --build
Creating network "votes_docker_default" with the default driver
Building webapp
Step 1/2 : FROM php:7.2-apache
---> a7d68dad7584
Step 2/2 : RUN docker-php-ext-install pdo_mysql && a2enmod rewrite
---> Using cache
---> 519d1b33af81
Successfully built 519d1b33af81
Successfully tagged votes_docker_webapp:latest
Creating votes_docker_adminer_1 ... done
Creating votes_docker_composer_1 ... done
Creating votes_docker_webapp_1 ... done
Creating votes_docker_db_1 ... done
Attaching to votes_docker_adminer_1, votes_docker_composer_1, votes_docker_webapp_1, votes_docker_db_1
adminer_1 | PHP 7.2.10 Development Server started at Mon Oct 15 10:14:02 2018
composer_1 | Composer could not find a composer.json file in /var/www/html
composer_1 | To initialize a project, please create a composer.json file as described in the https://getcomposer.org/ "Getting Started" section
votes_docker_composer_1 exited with code 1
webapp_1 | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.20.0.4. Set the 'ServerName' directive globally to suppress this message
webapp_1 | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.20.0.4. Set the 'ServerName' directive globally to suppress this message
webapp_1 | [Mon Oct 15 10:14:05.281793 2018] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.25 (Debian) PHP/7.2.10 configured -- resuming normal operations
webapp_1 | [Mon Oct 15 10:14:05.281843 2018] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
db_1 | 2018-10-15T10:14:06.541323Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release.
db_1 | 2018-10-15T10:14:06.541484Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.11) starting as process 1
db_1 | mbind: Operation not permitted
db_1 | mbind: Operation not permitted
db_1 | mbind: Operation not permitted
db_1 | mbind: Operation not permitted
db_1 | 2018-10-15T10:14:07.062202Z 0 [Warning] [MY-011071] [Server] World-writable config file './auto.cnf' is ignored.
db_1 | 2018-10-15T10:14:07.062581Z 0 [Warning] [MY-010107] [Server] World-writable config file './auto.cnf' has been removed.
db_1 | 2018-10-15T10:14:07.063146Z 0 [Warning] [MY-010075] [Server] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 0cd8212e-d063-11e8-8e69-0242ac140005.
db_1 | 2018-10-15T10:14:07.079020Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
db_1 | 2018-10-15T10:14:07.091951Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
db_1 | 2018-10-15T10:14:07.103829Z 0 [Warning] [MY-010315] [Server] 'user' entry 'mysql.infoschema@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-10-15T10:14:07.103896Z 0 [Warning] [MY-010315] [Server] 'user' entry 'mysql.session@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-10-15T10:14:07.103925Z 0 [Warning] [MY-010315] [Server] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-10-15T10:14:07.103947Z 0 [Warning] [MY-010315] [Server] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-10-15T10:14:07.104006Z 0 [Warning] [MY-010323] [Server] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-10-15T10:14:07.104034Z 0 [Warning] [MY-010323] [Server] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-10-15T10:14:07.104070Z 0 [Warning] [MY-010311] [Server] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-10-15T10:14:07.112700Z 0 [Warning] [MY-010330] [Server] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-10-15T10:14:07.112738Z 0 [Warning] [MY-010330] [Server] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-10-15T10:14:07.117764Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.11' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
Is it a MySQL misconfiguration?
I am not running it as root.
Also, as I use LAMP, do I need to stop Apache and MySQL before running the docker-compose command?
MODIFIED #3:
After some searching I added the mysql version in my config file and added a command option:
image: mysql:5.7.23
command: --default-authentication-plugin=mysql_native_password --disable-partition-engine-check
and the above error was fixed.
So:
1. In another console, as root, I run these commands (but I'm still not sure whether I need them):
sudo service apache2 stop
sudo service mysql stop
2. In a non-root console I run it with the flag to run in the background:
docker-compose up -d
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker-compose down
Stopping votes_docker_db_1 ... done
Stopping votes_docker_webapp_1 ... done
Stopping votes_docker_adminer_1 ... done
Removing votes_docker_db_1 ... done
Removing votes_docker_webapp_1 ... done
Removing votes_docker_composer_1 ... done
Removing votes_docker_adminer_1 ... done
Removing network votes_docker_default
docker-compose up -d
Creating network "votes_docker_default" with the default driver
Creating votes_docker_webapp_1 ... done
Creating votes_docker_adminer_1 ... done
Creating votes_docker_db_1 ... done
Creating votes_docker_composer_1 ... done
I have no errors in the output, but I expected to end up with a vendor directory in my project, given what I have in web/Dockerfile.yml:
FROM php:7.2-apache
RUN docker-php-ext-install \
    pdo_mysql \
    && a2enmod \
    rewrite
But I do not see this directory...
Was the installation successful or not?
I'm not sure where to go next.
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker info
Containers: 33
Running: 2
Paused: 0
Stopped: 31
Images: 19
Server Version: 17.12.1-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9b55aab90508bd389d7654c4baf173a981477d55
runc version: 9f9c96235cc97674e935002fc3d78361b696a69e
init version: v0.13.0 (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.15.0-36-generic
Operating System: Ubuntu 18.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.711GiB
Name: serge
ID: BDNU:HFWX:N6YV:IWYW:HJSU:SZ23:URPB:3FR2:7I3E:IFGK:AOLH:YRE5
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
votes_docker_webapp latest 519d1b33af81 27 hours ago 378MB
adminer latest 0038b45402de 4 weeks ago 81.7MB
composer 1.6 e28b5b53ab28 4 weeks ago 154MB
php 7.2-apache a7d68dad7584 4 weeks ago 378MB
mysql 5.7.23 563a026a1511 5 weeks ago 372MB
mysql 5.7.22 6bb891430fb6 2 months ago 372MB
test2_php latest 05534d47f926 3 months ago 84.7MB
test1_php latest 05534d47f926 3 months ago 84.7MB
<none> <none> 6060fcf4d103 3 months ago 81MB
php fpm-alpine 601d5b3a95d4 3 months ago 80.6MB
php apache d9faf33e6e40 3 months ago 377MB
mysql latest 8d99edb9fd40 3 months ago 445MB
php 7-fpm 854ffd8dc9d8 3 months ago 367MB
php 7.2 e86d9bb526ef 3 months ago 367MB
ukfx/php apache-stretch 5958cb7c2316 4 months ago 648MB
nginx alpine bc7fdec94612 4 months ago 18MB
hello-world latest e38bc07ac18e 6 months ago 1.85kB
composer/composer latest 5afb0951f2a4 2 years ago 636MB
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f8beea5dceca mysql:5.7.23 "docker-entrypoint.s…" 6 minutes ago Restarting (2) 6 seconds ago votes_docker_db_1
8309b5456dcf adminer "entrypoint.sh docke…" 6 minutes ago Up 6 minutes 0.0.0.0:8081->8080/tcp votes_docker_adminer_1
cc644206931b votes_docker_webapp "docker-php-entrypoi…" 6 minutes ago Up 6 minutes 0.0.0.0:8080->80/tcp votes_docker_webapp_1
How can I fix it?
Thanks!
According to the docker-compose documentation, build can be specified as a string containing a path to the build context if you have a Dockerfile inside it.
You are using a Dockerfile.yml file, which is not the default name (Dockerfile), so in this case context and dockerfile should be specified explicitly:
web:
  build:
    context: ./web
    dockerfile: Dockerfile.yml
The final docker-compose.yaml is:
version: '3'
services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile.yml
    environment:
      - APACHE_RUN_USER=www-data
    volumes:
      - ${DB_PATH_HOST}:${APP_PTH_CONTAINER}
    ports:
      - 8080:80
    working_dir: ${APP_PTH_CONTAINER}
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 1
    volumes:
      - ${DB_PATH_HOST}:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
  composer:
    image: composer:1.6
    volumes:
      - ${DB_PATH_HOST}:${APP_PTH_CONTAINER}
    working_dir: ${APP_PTH_CONTAINER}
    command: composer install
Addition to MODIFIED part:
Both web and adminer are configured to bind to port 8080 on the host system; that's why you have a conflict here. To resolve the issue you need to bind adminer to another host port, 8081 for example.
The final docker-compose.yaml is:
version: '3'
services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile.yml
    environment:
      - APACHE_RUN_USER=www-data
    volumes:
      - ${DB_PATH_HOST}:${APP_PTH_CONTAINER}
    ports:
      - 8080:80
    working_dir: ${APP_PTH_CONTAINER}
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 1
    volumes:
      - ${DB_PATH_HOST}:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8081:8080
  composer:
    image: composer:1.6
    volumes:
      - ${DB_PATH_HOST}:${APP_PTH_CONTAINER}
    working_dir: ${APP_PTH_CONTAINER}
    command: composer install
Addition to MODIFIED3 part:
I see the following error with your composer docker container:
composer_1 | Composer could not find a composer.json file in /var/www/html
composer_1 | To initialize a project, please create a composer.json file as described in the https://getcomposer.org/ "Getting Started" section
Is it because you accidentally put ${DB_PATH_HOST} instead of ${APP_PATH_HOST} in the composer config?
composer:
  image: composer:1.6
  volumes:
    - ${DB_PATH_HOST}:${APP_PTH_CONTAINER}
  working_dir: ${APP_PTH_CONTAINER}
  command: composer install
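If so, the composer service would presumably need the application path instead, reusing the APP_PATH_HOST variable already defined in the .env file:

composer:
  image: composer:1.6
  volumes:
    - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
  working_dir: ${APP_PTH_CONTAINER}
  command: composer install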

docker stack communicate between containers

I'm trying to set up a swarm using Docker but I'm having issues with communication between containers.
I have a cluster with 5 nodes: 1 manager and 4 workers.
3 apps: redis, splash, myapp
myapp has to be on the 4 workers
redis, splash just on the manager
myapp has to be able to communicate with redis and splash
I tried using the container name but it's not working. It resolves the container name to different IPs.
ping splash # returns a different IP than the container actually has
I am deploying the swarm using docker stack:
docker stack deploy -c docker-stack.yml myapp
Linking the containers also doesn't work.
Any ideas? Am I missing something?
root@swarm-manager:~# docker version
Client:
Version: 17.09.0-ce
API version: 1.32
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:42:18 2017
OS/Arch: linux/amd64
Server:
Version: 17.09.0-ce
API version: 1.32 (minimum version 1.12)
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:40:56 2017
OS/Arch: linux/amd64
Experimental: false
docker-stack.yml contains:
version: "3"
services:
splash:
container_name: splash
image: scrapinghub/splash
ports:
- 8050:8050
- 5023:5023
deploy:
mode: global
placement:
constraints:
- node.role == manager
redis:
container_name: redis
image: redis
ports:
- 6379:6379
deploy:
mode: global
placement:
constraints:
- node.role == manager
myapp:
container_name: myapp
image: myapp_image:latest
environment:
REDIS_ENDPOINT: redis:6379
SPLASH_ENDPOINT: splash:8050
deploy:
mode: global
placement:
constraints:
- node.role == worker
entrypoint:
- ping google.com
---- EDIT ----
I tried with curl also. Didn't work.
docker stack deploy -c docker-stack.yml myapp
Creating network myapp_default
Creating service myapp_splash
Creating service myapp_redis
Creating service myapp_myapp
curl http://myapp_splash:8050
curl: (7) Failed to connect to myapp_splash port 8050: No route to host
curl http://splash:8050
curl: (7) Failed to connect to splash port 8050: No route to host
What worked is getting the actual container name of splash, which is some random generated string.
curl http://myapp_splash.d7bn0dpei9ijpba4q41vpl4zz.tuk1cimht99at9g0au8vj9lkz:8050
But this doesn't really help me.
Ping is not the proper tool to try and connect services. For some reason it doesn't work with docker networks. Try curl http://serviceName instead.
Other than that: containers can't be named when using stack deploy; instead, your service name is used (which coincidentally is the same) to access another service.
I managed to get it working using curl http://tasks.splash:8050 or http://tasks.myapp_splash:8050.
I don't know what is causing this issue, though. Feel free to comment with an answer.
It seems that containers in a stack are named tasks.<service name>, so the command ping tasks.myservice works for me!
An interesting point to note: names like <stackname>_<service name> will also resolve and are ping'able, but the IP address is incorrect. This is frustrating.
(For example, if you do docker stack deploy -c my.yml AA you'll get a name like AA_myservice, which will resolve to an incorrect address.)
To add to the above answer: from a network point of view, curl and ping do the same thing. Both will try to resolve the name passed to them; then curl will try to connect using the specified protocol (http in the example above), while ping will send ICMP echo requests.
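To illustrate the tasks.<service name> naming discussed above, a quick check from inside one of the myapp containers could look like this (service names as in the stack file; nslookup may not be installed in every image):

# resolve the service's virtual IP (what 'splash' normally points to)
nslookup splash
# resolve the individual task IPs behind the service
nslookup tasks.splash
# connect to splash over HTTP through the task-level name
curl http://tasks.splash:8050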
