MariaDB docker: Access denied for user 'root'@'localhost' - docker

I'm using a MariaDB docker image and keep getting this warning:
[Warning] Access denied for user 'root'@'localhost' (using password: NO)
This is my Dockerfile used to create the image:
FROM mariadb:10.6.4
ENV MYSQL_ROOT_PASSWORD pw
ENV MARIADB_DATABASE db1
ENV MARIADB_USER user1
ENV MARIADB_PASSWORD user1pw
I start the container with docker-compose:
mariadb:
  restart: always
  image: mariadb_image
  container_name: mariadb_container
  build: topcat_mariadb/.
  ports:
    - 2306:3306
and I access the container from another container using:
mysql+pymysql://user1:user1pw@mariadb_container:3306/db1
Most of the solutions Google found involve running some kind of SQL statement, but since I'm running a Docker container, I need an ephemeral solution.
Anyone know how to fix this in docker?
-----EDIT-----
I had forgotten to add the health check section to the docker-compose file:
mariadb:
  restart: always
  image: mariadb_image
  container_name: mariadb_container
  build: topcat_mariadb/.
  ports:
    - 2306:3306
  healthcheck:
    test: ["CMD-SHELL", 'mysqladmin ping']
    interval: 10s
    timeout: 2s
    retries: 10

Update
Based on the update to your question, you're trying to run the mysqladmin ping command inside the container. mysqladmin is attempting to connect as the root user, but authenticating to your database server requires a password.
You can provide a password to mysqladmin by:
Using the -p command line option
Using the MYSQL_PWD environment variable
Creating a credentials file
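For example, each of these might look like this (a sketch, assuming the root password is available inside the container as $MYSQL_ROOT_PASSWORD, as in the compose file below):
# Option 1: the -p command line option (no space between -p and the password)
mysqladmin -uroot -p"$MYSQL_ROOT_PASSWORD" ping

# Option 2: the MYSQL_PWD environment variable
MYSQL_PWD="$MYSQL_ROOT_PASSWORD" mysqladmin ping

# Option 3: a ~/.my.cnf credentials file, read automatically by the MySQL clients
printf '[client]\nuser=root\npassword=%s\n' "$MYSQL_ROOT_PASSWORD" > ~/.my.cnf
mysqladmin ping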
If we move the root password out of your image, and instead set it at runtime, we can write your docker-compose.yml file like this:
version: "3"
services:
  mariadb:
    restart: always
    image: mariadb_image
    container_name: mariadb_container
    build: topcat_mariadb/.
    environment:
      - "MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD"
      - "MYSQL_PWD=$MYSQL_ROOT_PASSWORD"
    healthcheck:
      test: ["CMD-SHELL", 'mysqladmin ping']
      interval: 10s
      timeout: 2s
      retries: 10
And then in our .env file we can set:
MYSQL_ROOT_PASSWORD=pw1
Now after the container starts up, we see that the container is healthy:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c1c9c9f787e6 mariadb_image "docker-entrypoint.s…" 28 seconds ago Up 27 seconds (healthy) 3306/tcp mariadb_container
As a side note, it's not clear from this example why you're building a custom image at all: it's better to set the environment variables at runtime than to create an image with baked-in credentials.
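For example, a minimal sketch using the stock image directly, with the user password also supplied from the environment (MARIADB_PASSWORD here is an assumed addition to the .env file):
services:
  mariadb:
    image: mariadb:10.6.4
    restart: always
    environment:
      - "MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD"
      - "MYSQL_PWD=$MYSQL_ROOT_PASSWORD"
      - "MARIADB_DATABASE=db1"
      - "MARIADB_USER=user1"
      - "MARIADB_PASSWORD=$MARIADB_PASSWORD"
    healthcheck:
      test: ["CMD-SHELL", 'mysqladmin ping']
      interval: 10s
      timeout: 2s
      retries: 10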
Previous answer
I can't reproduce your problem when using the mysql client or Python code. Given the following docker-compose.yml:
version: "3"
services:
  mariadb:
    restart: always
    image: mariadb_image
    container_name: mariadb_container
    build: topcat_mariadb/.
  shell:
    image: mariadb:10.6.4
    command: sleep inf
(The directory topcat_mariadb contains the Dockerfile from your question.)
If I exec into the shell container:
docker-compose exec shell bash
And run mysql like this:
mysql -h mariadb_container -u user1 -p db1
It works just fine:
root@4fad8e8435df:/# mysql -h mariadb_container -u user1 -p db1
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 6
Server version: 10.6.4-MariaDB-1:10.6.4+maria~focal mariadb.org binary distribution
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [db1]>
It looks like you may be using sqlalchemy. If I add a Python container to the mix:
version: "3"
services:
  mariadb:
    restart: always
    image: mariadb_image
    container_name: mariadb_container
    build: topcat_mariadb/.
  shell:
    image: mariadb:10.6.4
    command: sleep inf
  python:
    image: python:3.9
    command: sleep inf
And then run the following Python code in the python container:
>>> import sqlalchemy
>>> e = sqlalchemy.engine.create_engine('mysql+pymysql://user1:user1pw@mariadb_container:3306/db1')
>>> res = e.execute('select 1')
>>> res.fetchall()
[(1,)]
It also seems to work without a problem.

Related

Why can't I access my local docker-compose from the browser?

I've run my docker-compose file trying to dockerize pgadmin for Postgres, but my browser cannot connect to pgadmin at localhost:8080.
This is the docker-compose file that I am running
version: '3'
services:
  db:
    container_name: postgres_container
    image: postgres
    restart: always
    environment:
      POSTGRES_DB: postgres_db
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: secret
      PGDATA: /var/lib/postgresql/data
    ports:
      - "5432:5432"
    volumes:
      - db-data:/var/lib/postgresql/data
  pgadmin:
    container_name: pgadmin4_container
    image: dpage/pgadmin4:5.5
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: secret
      PGADMIN_LISTEN_PORT: 80
    ports:
      - "8080:80"
    volumes:
      - pgadmin-data:/var/lib/pgadmin
volumes:
  db-data:
  pgadmin-data:
This is my docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c6a6a588f639 dpage/pgadmin4:5.5 "/entrypoint.sh" 3 hours ago Up 9 minutes 443/tcp, 0.0.0.0:8080->80/tcp pgadmin4_container
ad6fe3349717 postgres "docker-entrypoint.s" 3 hours ago Up 9 minutes 0.0.0.0:5432->5432/tcp postgres_container
When I try to connect from the browser to localhost:8080, it says Connection attempt failed.
I am using almost the same docker-compose file on Windows 10 with WSL 2 and can connect immediately using Firefox build 100 to localhost:8080.
The only difference is that my compose file uses image: dpage/pgadmin4, i.e. the latest tag, not v5.5.
I solved this problem. It turns out that the DOCKER_HOST variable was set to 192.168.99.100:2376. You can see it by running echo $DOCKER_HOST.
I just bound my docker container to this address instead of localhost and everything worked fine:
docker run -d -p 192.168.99.100:9411:9411 openzipkin/zipkin
I was then able to access my docker container in the browser at http://192.168.99.100:9411/. Thank you very much everyone.

Docker unable to connect to postgres, but same command works fine when running from the container's bash

In the Dockerfile used to build the image, I have the command:
CMD ["/app/database/updateLocalDocker.sh"]
The shell script should connect to the postgres service using liquibase but fails with the error connection refused...
When I comment out the above CMD and run the same script directly in the container via docker exec -t -i f42c4bbcd95d /bin/bash, it works fine.
The URL I'm trying to connect to is: jdbc:postgresql://localhost:5432/service_x
I have a feeling that it's related to either the service not being started yet or a network issue, when trying to execute the CMD during the docker-compose build stage.
Any guidance would be much appreciated.
docker-compose.yml:
version: "3.8"
services:
  db:
    image: local.db
    build:
      context: .
    ports:
      - 15432:5432
    environment:
      POSTGRES_PASSWORD: password
    networks:
      - a
networks:
  a:
    name: a
    external: true
To access your database from your localhost, you need to use port 15432 instead of 5432.
services:
  db:
    image: local.db
    build:
      context: .
    ports:
      - 15432:5432    # <--- Here
    environment:
      POSTGRES_PASSWORD: password
    networks:
      - a
The first port is the host port and the second is the port used inside the container.
You can also access the database from another container using the container name and the internal port, as shown below.
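For example, a quick sketch of both access paths (assuming the psql client is installed where you run it, and the default postgres user):
# From the host, via the published port:
psql -h localhost -p 15432 -U postgres

# From another container on network "a", via the service name and internal port:
psql -h db -p 5432 -U postgres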
Docker port mapping documentation :
https://docs.docker.com/config/containers/container-networking/
Instead of putting the command in the Dockerfile, you can directly put the command in the docker-compose file and remove CMD ["/app/database/updateLocalDocker.sh"].
docker-compose.yml
version: "3.8"
services:
  db:
    image: local.db
    build:
      context: .
    command: sh -c "<Enter-your-command>"
    ports:
      - 15432:5432
    environment:
      POSTGRES_PASSWORD: password
    networks:
      - a
networks:
  a:
    name: a
    external: true
If you have a single command to execute:
command: <command>
If you have more than one command, separate them with &&.
Syntax:
sh -c "<command-1> && <command-2> && <command-3>"
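For example, a hypothetical sketch chaining a crude wait with the script from your Dockerfile (a real readiness check, like the wait-for-it approaches below, is better than a fixed sleep):
command: sh -c "sleep 10 && /app/database/updateLocalDocker.sh"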

Wait for a docker container to be ready with command in docker-compose.yml

I have a mysql db and a prisma image in my docker-compose.yml. I want prisma to wait for the db to be ready, because otherwise prisma keeps restarting and it won't work at all. I know from here that I can use ./wait-for-it, but I was not able to connect the pieces after searching for a while.
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.25
    restart: unless-stopped
    ports:
      - "4001:4466"
    depends_on:
      - db
    # I added this command
    command: ["./wait-for-it.sh", "db:33061", "--"]
    environment:
      PRISMA_CONFIG: |
        managementApiSecret: server.secret.123
        port: 4466
        databases:
          default:
            connector: mysql
            active: true
            host: db
            port: 3306
            user: ***
            password: ***
  db:
    image: mysql:5.7
    restart: unless-stopped
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_USER: ***
      MYSQL_ROOT_PASSWORD: ***
    ports:
      - "33061:3306"
    volumes:
      - /docker/mysql:/var/lib/mysql
I added the command above but nothing changed, not even an error in the logs, though as I understand it, the command is run inside the container.
How do I get the ./wait-for-it.sh into the container?
And can this even work this way with the command, or does this depend on the prisma image?
Otherwise, how would I achieve the waiting?
I just have the docker-compose file and want to do docker-compose up -d
Now I found out how to get wait-for-it.sh into the container.
I downloaded wait-for-it.sh into the project folder and then created a file called Dockerfile with these contents:
FROM prismagraphql/prisma:1.25
COPY ./wait-for-it.sh /app/wait-for-it.sh
RUN chmod +x /app/wait-for-it.sh
ENTRYPOINT ["/bin/sh","-c","/app/wait-for-it.sh db:3306 -t 30 -- /app/start.sh"]
In my docker-compose.yml I replaced
image: prismagraphql/prisma:1.25 with build: . which causes a new build from the Dockerfile in my project path.
Now the new image will be built from the prisma image and the wait-for-it.sh will be copied into the new image. Then the ENTRYPOINT is overridden and prisma will wait until the db is ready.
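For reference, a sketch of just the changed part of the service definition (everything else stays as in the question):
services:
  prisma:
    build: .    # was: image: prismagraphql/prisma:1.25
    restart: unless-stopped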
You are confusing internal and external ports. The database is visible on port 3306 inside your network, so you have to wait on db:3306 and not on 33061.
Publishing a port has no effect inside the user-defined bridge network that docker-compose creates by default: all ports are visible to containers inside the network, and publishing a port only makes it visible outside the network.
Also, check what the ENTRYPOINT of the image prismagraphql/prisma:1.25 is. If it is not /bin/sh -c or another type of shell, your command won't get executed.
UPD
If the ENTRYPOINT in the base image differs from /bin/sh -c, you can override it. Supposing it is /bin/sh -c /app/start.sh, you could do the following:
docker-compose.yml
...
services:
  prisma:
    entrypoint: ["/bin/sh", "-c", "./wait-for-it.sh db:3306 && /app/start.sh"]

What is the alternative to condition form of depends_on in docker-compose Version 3?

docker-compose 2.1 offers the nice feature to specify a condition with depends_on. The current docker-compose documentation states:
Version 3 no longer supports the condition form of depends_on.
Unfortunately the documentation does not explain why the condition form was removed, and it lacks any specific recommendation on how to implement that behaviour from V3 upwards.
There's been a move away from specifying container dependencies in compose. They're only valid at startup time and don't work when dependent containers are restarted at run time. Instead, each container should include a mechanism to retry reconnecting to dependent services when the connection is dropped. Many libraries for connecting to databases or REST API services have configurable built-in retries; I'd look into those. It's needed for production code anyway.
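For example, a minimal sketch of such retry logic with sqlalchemy (the attempt count and delay are illustrative assumptions):
import time
import sqlalchemy

def connect_with_retry(url, attempts=30, delay=2):
    """Try to connect until the database accepts connections."""
    for attempt in range(attempts):
        try:
            engine = sqlalchemy.create_engine(url)
            with engine.connect():
                return engine  # connection succeeded
        except Exception:
            time.sleep(delay)  # db not ready yet; wait and retry
    raise RuntimeError("database never became available")

engine = connect_with_retry('mysql+pymysql://user1:user1pw@mariadb_container:3306/db1')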
Since 1.27.0, the 2.x and 3.x formats have been merged into the Compose Specification schema.
version is now optional, so you can just remove it and specify a condition as before:
services:
  web:
    build: .
    depends_on:
      redis:
        condition: service_healthy
  redis:
    image: redis
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 1s
      timeout: 3s
      retries: 30
There are some external tools that let you mimic this behaviour. For example, with the dockerize tool you can wrap your CMD or ENTRYPOINT with dockerize -wait and that will prevent running your application until specified services are ready.
If your docker-compose file used to look like this:
version: '2.1'
services:
  kafka:
    image: spotify/kafka
    healthcheck:
      test: nc -z localhost 9092
  webapp:
    image: foo/bar # your image
    healthcheck:
      test: curl -f http://localhost:8080
  tests:
    image: bar/foo # your image
    command: YOUR_TEST_COMMAND
    depends_on:
      kafka:
        condition: service_healthy
      webapp:
        condition: service_healthy
then you can use dockerize in your v3 compose file like this:
version: '3.0'
services:
  kafka:
    image: spotify/kafka
  webapp:
    image: foo/bar # your image
  tests:
    image: bar/foo # your image
    command: dockerize -wait tcp://kafka:9092 -wait http://webapp:8080 YOUR_TEST_COMMAND
Just thought I'd add my solution for running postgres and an application via docker-compose, where I need the application to wait for the init sql script to complete before starting.
dockerize seems to wait only for the db port (5432) to become available, which is the equivalent of the depends_on that can be used in compose file version 3:
version: '3'
services:
  app:
    container_name: back-end
    depends_on:
      - postgres
  postgres:
    image: postgres:10-alpine
    container_name: postgres
    ports:
      - "5432:5432"
    volumes:
      - ./docker-init:/docker-entrypoint-initdb.d/
The Problem:
If you have a large init script, the app will start before it completes, since depends_on only waits for the db container to start, not for the init scripts to finish.
Although I do agree that the solution should be implemented in the application logic, our problem occurs only when we want to run tests and prepopulate the database with test data, so it made more sense to implement a solution outside the code, as I tend not to like introducing code "to make tests work".
The Solution:
Implement a healthcheck on the postgres container.
For me that meant checking that the command of pid 1 is postgres, since pid 1 runs a different command while the init-db scripts are running.
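A minimal sketch of what that healthcheck script (mounted later as /var/lib/healthcheck.sh) might look like, assuming a Linux /proc filesystem:
#!/bin/sh
# Healthy only once pid 1 is the actual postgres server process;
# during the init-db phase pid 1 runs the entrypoint script instead.
[ "$(cat /proc/1/comm)" = "postgres" ]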
Write a script on the application side which will wait for postgres to become healthy. The script looks like this:
#!/bin/bash
function check {
  # Ask the Docker API (via the mounted socket) for the health status of the "postgres" container
  STATUS=$(curl -s --unix-socket /var/run/docker.sock http://localhost/v1.24/containers/postgres/json \
    | python -c 'import sys, json; print(json.load(sys.stdin)["State"]["Health"]["Status"])')
  if [ "$STATUS" = "healthy" ]; then
    return 0
  fi
  return 1
}

until check; do
  echo "Waiting for postgres to be ready"
  sleep 5
done
echo "Postgres ready"
The docker-compose file should then mount the script directories, so that we don't have to edit the application's Dockerfile, and, if we're using a custom postgres image, we can keep using the docker files for our published images.
We also override the entrypoint defined in the app's docker file so that we can run the wait script before the app starts:
version: '3'
services:
  app:
    container_name: back-end
    entrypoint: ["/bin/sh","-c","/opt/app/wait/wait-for-postgres.sh && <YOUR_APP_START_SCRIPT>"]
    depends_on:
      - postgres
    volumes:
      - //var/run/docker.sock:/var/run/docker.sock
      - ./docker-scripts/wait-for-postgres:/opt/app/wait
  postgres:
    image: postgres:10-alpine
    container_name: postgres
    ports:
      - "5432:5432"
    volumes:
      - ./docker-init:/docker-entrypoint-initdb.d/
      - ./docker-scripts/postgres-healthcheck:/var/lib
    healthcheck:
      test: /var/lib/healthcheck.sh
      interval: 5s
      timeout: 5s
      retries: 10
I reached this page because one container would not wait for the one it depends on, and I had to run a docker system prune to get it working. An orphaned container error prompted me to run the prune.

Docker healthcheck in compose file

I'm trying to integrate the new healthcheck into my docker system, but I don't really know how to do it the right way :/
The problem is, my database container needs more time to start up and initialize the database than the container that starts my main application.
As a result, the main container won't start correctly because of the missing database connection.
I wrote a healthcheck.sh script to check the database container for connectivity, so the main container starts booting once connectivity is available. But I don't know how to integrate it correctly into the Dockerfile and my docker-compose.yml.
healthcheck.sh looks like:
#!/bin/bash
COUNTER=0
while [[ $COUNTER = 0 ]]; do
  # Try a trivial query; loop until it succeeds
  mysql --host=HOST --user="user" --password="password" --database="databasename" --execute="SELECT 1";
  if [[ $? -ne 0 ]]; then
    sleep 1
    echo "Let's sleep again"
  else
    COUNTER=1
    echo "OK, let's go!"
  fi
done
mysql container Dockerfile:
FROM repository/mysql-5.6:latest
MAINTAINER Me
... some copies, chmod and so on
VOLUME ["/..."]
EXPOSE 3306
CMD [".../run.sh"]
HEALTHCHECK --interval=1s --timeout=3s CMD ./healthcheck.sh
docker-compose.yml like:
version: '2'
services:
  db:
    image: db image
    restart: always
    dns:
      - 10.
    ports:
      - "${MYSQL_EXTERNAL_PORT}:${MYSQL_INTERNAL_PORT}"
    environment:
      TZ: Europe/Berlin
  data:
    image: data image
  main application:
    image: application image
    restart: always
    dns:
      - 10.
    ports:
      - "${..._EXTERNAL_PORT}:${..._INTERNAL_PORT}"
    environment:
      TZ: Europe/Berlin
    volumes:
      - ${HOST_BACKUP_DIR}:/...
    volumes_from:
      - data
      - db
What do I have to do to integrate this healthcheck into my docker-compose.yml file so that it works?
Or is there any other way to delay the startup of my main container?
Thanks, Markus
I believe this is similar to Docker Compose wait for container X before starting Y
Your db_image needs to support curl.
To do that, create your own db_image as:
FROM base_image:latest
RUN apt-get update
RUN apt-get install -y curl
EXPOSE 3306
Then all you should need is a docker-compose.yml that looks like this:
version: '2'
services:
  db:
    image: db_image
    restart: always
    dns:
      - 10.
    ports:
      - "${MYSQL_EXTERNAL_PORT}:${MYSQL_INTERNAL_PORT}"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:${MYSQL_INTERNAL_PORT}"]
      interval: 30s
      timeout: 10s
      retries: 5
    environment:
      TZ: Europe/Berlin
  main_application:
    image: application_image
    restart: always
    depends_on:
      db:
        condition: service_healthy
    links:
      - db
    dns:
      - 10.
    ports:
      - "${..._EXTERNAL_PORT}:${..._INTERNAL_PORT}"
    environment:
      TZ: Europe/Berlin
    volumes:
      - ${HOST_BACKUP_DIR}:/...
    volumes_from:
      - data
      - db
In general your application should be able to cope with unavailable resources, but there are also some cases at startup where it is quite convenient to have one container wait for another to be "fully available". Docker itself doesn't handle that for you, but you can handle the startup in the resource-using container by delaying the actual command with a script.
There is a good example for a postgresql startup check that can be used in any container that needs to wait for the database to be "fully started". Please see the sample code in the docker docs: https://docs.docker.com/compose/startup-order/
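The pattern from those docs looks roughly like this (a sketch; it assumes the psql client is available in the waiting container and the password is exported as POSTGRES_PASSWORD):
#!/bin/sh
# wait-for-postgres.sh: block until postgres answers, then exec the real command
set -e

host="$1"
shift

until PGPASSWORD="$POSTGRES_PASSWORD" psql -h "$host" -U postgres -c '\q'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec "$@"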
Since docker-compose 1.10.0 you can specify healthchecks in your compose file: https://github.com/docker/docker.github.io/blob/master/compose/compose-file.md#healthcheck
It makes use of https://docs.docker.com/engine/reference/builder/#/healthcheck, which was introduced with Docker 1.12.
