docker, MYSQL_ROOT_PASSWORD does not work

docker-compose:

mysql:
  image: mysql:5.7.16
  container_name: f_mysql
  volumes:
    - ./db:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: sheep
  expose:
    - '3306'
I use docker exec to get inside this container, and when I type echo $MYSQL_ROOT_PASSWORD I get sheep.
But the MySQL root password is still '': when I type mysql -uroot, I can log in.

For me the issue was that I'd created the db volume with the random password option set, then disabled that option, but hadn't cleared the volume. So no matter what changes I made to the docker-compose file, the old volume with the old login information was still there.
I had to run docker volume ls to find the volume and then docker volume rm <name> to remove it. After re-upping, everything worked.
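A minimal sketch of that cleanup, assuming the stale volume shows up as myproject_db in the listing (the real name comes from the docker volume ls output):

docker-compose down            # stop the containers first
docker volume ls               # find the old volume's name
docker volume rm myproject_db
docker-compose up -d           # re-initializes with the current MYSQL_ROOT_PASSWORD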
Regarding other answers on this page: the format for specifying environment variables is correct. You can use either
environment:
  MYSQL_ROOT_PASSWORD: a_password

OR

environment:
  - MYSQL_ROOT_PASSWORD=a_password

The image's entrypoint script will never make changes to an existing database. If you mount an existing data directory into /var/lib/mysql, then MYSQL_ROOT_PASSWORD will have no effect.
Workaround
Remove all unused volumes: docker volume prune
Remove the volume from your database service: docker volume rm <db_data>
Bring the containers down and remove their volumes: docker-compose down --volumes

You need to fix your docker-compose file:
environment:
  - MYSQL_ROOT_PASSWORD=sheep
The following is the full docker-compose that achieves what you want:
version: '2'
services:
  mysql:
    image: mysql:5.7.16
    container_name: f_mysql
    volumes:
      - ./db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=sheep
    expose:
      - '3306'
Then, with docker exec -it f_mysql /bin/bash and, inside the container, mysql -u root -p, using sheep as the password will be the only way to connect to the MySQL server.

This happens when the volume mounted from a host directory has the wrong permissions.
You can fix this by letting Docker create the directory itself.
If you have existing data, you can compare the new directory with the previous one in order to apply the correct chmod, because the right permissions depend on whether docker/your user is part of the root group.
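A minimal sketch of both options, assuming the bind mount is ./db and that the container's mysql user is UID/GID 999 (the default in the official image, but verify on your system first):

# Option 1: let Docker recreate the directory (destroys existing data)
docker-compose down
sudo rm -rf ./db
docker-compose up -d

# Option 2: keep the data, align ownership with the container's mysql user
sudo chown -R 999:999 ./db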

Please note that, according to the official Docker image documentation: "none of those variables will have any effect if you start the container with a data directory that already contains a database". In fact, in this case you already have a "mysql.user" table, and you should use the user info stored there. The same thing happens when you try to restore a full dump.

This happened when the mounted directory had extended attributes (ea) on Mac.
It is better to delete the directory and recreate it, or to check the attributes with the xattr command:
$ ls -l ./db
$ xattr -l ./db

I had the same problem, and after a lot of trying I found my solution. When I ran docker-compose for the first time, I left everything at the original settings, like this:
environment:
  MYSQL_ROOT_PASSWORD: MYSQL_ROOT_PASSWORD
Then I changed the password and ran docker-compose up, but the root password was still MYSQL_ROOT_PASSWORD.
My solution was to delete the mysql Docker image from my disk. After that, Docker downloaded everything again BUT also set my password for root as well. Maybe this is not the best way, but I am also a beginner with Docker.
So in a nutshell, a simple docker-compose up is not enough.
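A sketch of that reset, using the image tag from the question; note that dropping the data volume (an assumption here, and destructive) is usually what allows the password to be re-initialized:

docker-compose down --volumes   # stop containers and drop their volumes
docker rmi mysql:5.7.16         # remove the cached image
docker-compose up -d            # re-pull and re-initialize with the new password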

Related

Sharing data between docker containers without making data persistent

Let's say I have a docker-compose file with two containers:
version: "3"
services:
app:
image: someimage:fpm-alpine
volumes:
- myvolume:/var/www/html
web:
image: nginx:alpine
volumes:
- myvolume:/var/www/html
volumes:
myvolume:
The app container contains the application code in the /var/www/html directory which gets updated with each version of the image, so I don't want this directory to be persistent.
Yet I need to share the data with the nginx container. If I use a named volume or a host bind, the data is persistent and doesn't get updated with a new version of the image. Maybe there is a way to automatically delete a volume whenever I pull a new image? Or a way to share an anonymous volume?
I think it's better for you to use an anonymous volume, which is declared with only the container path (./:/var/www/html would be a host bind mount instead):
volumes:
  - /var/www/html
You would have to be willing to drop back to docker-compose version 2 and use data containers with the volumes_from directive, which is equivalent to --volumes-from on a docker run command.
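A hedged sketch of that version 2 layout, reusing the service names from the question; declaring /var/www/html as an anonymous volume on app is my assumption about how the sharing would be wired up:

version: '2'
services:
  app:
    image: someimage:fpm-alpine
    volumes:
      - /var/www/html        # anonymous volume, recreated with the container
  web:
    image: nginx:alpine
    volumes_from:
      - app                  # web sees app's /var/www/html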
This should work fine. The problem isn't with docker. You can use volumes to communicate in this way. If you run docker-compose up in a directory with the following compose file:
version: "3"
services:
one:
image: ubuntu
command: sleep 100000
volumes:
- vol:/vol
two:
image: ubuntu
command: sleep 100000
volumes:
- vol:/vol
volumes:
vol:
Then, in a second terminal, run docker exec -it so_one_1 bash (you might have to do a docker ps to find the exact name of the container; it can change). You'll find yourself in a bash shell inside the container. Change to the /vol directory with cd /vol, then echo "wobble" > wibble.txt, then exit the shell (Ctrl-D).
In the same terminal you can then run docker exec -it so_two_1 bash (again, check the names). Just like last time, cd /vol and type ls -gAlFh; you'll see the wibble.txt file we created in the other container. You can even cat wibble.txt to see the contents. It'll be there.
So if the problem isn't docker, what can it be? I think the problem is that nginx isn't seeing the changes on the filesystem. For that, I believe that setting expires -1; inside a location block in the config will actually disable caching completely and may solve the problem (dev only).
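A minimal sketch of that directive, assuming the shared files are served from /var/www/html (adapt the location block to your own config; dev only):

location / {
    root /var/www/html;
    expires -1;    # sends an Expires header in the past plus Cache-Control: no-cache
}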

How to move docker volume to disk location?

I have a MySQL docker image running in a docker container on an Ubuntu VPS. I bring MySQL up using the docker-compose up -d command via the following docker-compose.yml file:
version: "3"
services:
mysql_server:
image: mysql:8.0.21
restart: always
container_name: mysql_server
environment:
MYSQL_DATABASE: db_name
MYSQL_USER: db_username
MYSQL_PASSWORD: db_password
MYSQL_ROOT_PASSWORD: root_password
volumes:
- mysql_server_data:/var/lib/mysql
- /mysql/files/conf.d:/etc/mysql/conf.d
I am having some performance issues and would like to do the following in an attempt to improve performance.
I want the data in the mysql_server_data volume to be mounted at /mysql/data without losing any data, as this instance is running in production.
I also want to mount the MySQL config file on /mysql/files so I can change the instance configuration to increase performance.
Questions
How can I change the data location of the volume from mysql_server_data to /mysql/data?
Also, how can I mount MySQL's config file on /mysql/files/conf.d to allow me to update the settings?
I tried to mount config file like this
volumes:
  - /mysql/files/conf.d:/etc/mysql/conf.d
But that created a directory /mysql/files/conf.d with no config file.
To move the data:
Shut down the container with docker-compose down, then, on the local file system, copy the data from mysql_server_data to /mysql/data. Then change the compose file to reflect the new location. Finally, restart the container with docker-compose up.
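A sketch of those steps; the volume name prefix and the default Docker volume path are assumptions, so confirm the real Mountpoint with docker volume inspect first:

docker-compose down
docker volume inspect <project>_mysql_server_data   # note the Mountpoint
sudo cp -a /var/lib/docker/volumes/<project>_mysql_server_data/_data/. /mysql/data/
# then edit docker-compose.yml: replace the named volume line with
#   - /mysql/data:/var/lib/mysql
docker-compose up -d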
To mount the config files, as per the Docker Hub documentation for MySQL: if /my/custom/config-file.cnf is the path and name of your custom configuration file, then your volume mapping is:
/my/custom:/etc/mysql/conf.d
Note that mapping the volume into the container does not bring the data from your container to your local machine, but the other way around. So if you want to have the file in the container, you must first create it on your local machine.
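A minimal sketch of creating that file first; the file name and the setting inside it are illustrative, not from the original post:

mkdir -p /mysql/files/conf.d
cat > /mysql/files/conf.d/custom.cnf <<'EOF'
[mysqld]
innodb_buffer_pool_size = 1G
EOF

After that, the existing /mysql/files/conf.d:/etc/mysql/conf.d mapping will expose custom.cnf inside the container.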
Use the trick suggested by Docker maintainer Sebastiaan van Stijn at https://github.com/moby/moby/issues/31417 to send the tar over stdout:
docker run --rm -v vol_name:/vol_path img_name sh -c 'tar -cOzf - /vol_path' > volume-export.tgz
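The restore direction can be sketched the same way (same placeholder names); tar strips the leading slash on create, so extracting at / puts the files back under /vol_path:

docker run --rm -i -v vol_name:/vol_path img_name sh -c 'tar -xzf - -C /' < volume-export.tgz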

docker container does not work after restart

I have a VM, and inside it I am running an Elasticsearch Docker container built through docker-compose. It was working pretty well. Then, after the power suddenly went out, I tried running the container again but got an error that wasn't present before (error screenshot not included).
The container then kept on restarting. And when I checked the file permissions (within the small window of time before the container restarts), the ownership of the mounted elasticsearch.yml looked wrong (screenshot not included).
Here's my docker-compose.yml:
version: '2.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
    hostname: elasticsearch
    restart: always
    user: root
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    env_file:
      - devopsfw-elk.env
What is actually happening here? I'm fairly new to Docker and Elasticsearch, and I'm very confused by the errors that are occurring.
The problem is that the file has been corrupted. Delete it and restart the container:
rm -i ./*elasticsearch.yml*
If you have problems deleting it, read this:
https://superuser.com/questions/197605/delete-a-corrupt-file-in-linux
It looks like the file was owned by the root user and has been corrupted. In order to delete it, you have to use superuser access, aka sudo, so the correct command would be:
sudo rm -i ./*elasticsearch.yml*
After that, recreate the file and restart the container.

File in docker-entrypoint-initdb.d never gets executed when using docker compose

I'm using Docker Toolbox on Windows 10
I can access the PHP part successfully via http://192.168.99.100:8000. I have been working on the mariadb part but am still having several problems.
I have an SQL file at /mariadb/initdb/abc.sql, so it should be copied into /docker-entrypoint-initdb.d. After the container is created, I use docker-compose exec mariadb bash to access the container; the file is there as /docker-entrypoint-initdb.d/abc.sql, but it never gets executed. I have also tested importing the SQL file into the container manually, which was successful, so the SQL file is valid.
I don't quite understand the data folder mapping, or what to do to get the folder to sync with the container. I always get this warning when recreating the container using docker-compose up -d:
WARNING: Service "mariadb" is using volume "/var/lib/mysql" from the previous container. Host mapping "/.../mariadb/data" has no effect. Remove the existing containers (with docker-compose rm mariadb) to use the host volume mapping.
Recreating db ... done
Questions
How do I get the SQL file in /docker-entrypoint-initdb.d to be executed?
What is the right way to map the data folder to the mariadb container?
Please guide
Thanks
This is my docker-compose.yml
version: "3.2"
services:
php:
image: php:7.1-apache
container_name: web
restart: always
volumes:
- /.../php:/var/www/html
ports:
- "8000:80"
mariadb:
image: mariadb:latest
container_name: db
restart: always
environment:
- MYSQL_ROOT_PASSWORD=12345
volumes:
- /.../mariadb/initdb:/docker-entrypoint-initdb.d
- /.../mariadb/data:/var/lib/mysql
ports:
- "3306:3306"
For me the issue was the fact that Docker didn't clean up my mounted volumes from previous runs.
Running docker volume ls will list any volumes; if previous ones exist, run docker volume rm on the volume to remove it.
As stated in the Docker MySQL docs, scripts in the /docker-entrypoint-initdb.d folder are only evaluated the first time the container runs, and if a previous volume remains, the scripts won't run.
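A sketch of forcing the scripts to run again; this compose file bind-mounts /.../mariadb/data as the data directory, so the stale state lives in that host directory, and emptying it (an assumption, and destructive) makes the entrypoint re-run the init scripts:

docker-compose down
sudo rm -rf /.../mariadb/data   # substitute the real host path
docker-compose up -d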
As for the mapping, you simply need to mount your script folder to the '/docker-entrypoint-initdb.d' folder in the image:
volumes:
  - ./db/:/docker-entrypoint-initdb.d
I have a single script file in a folder named db, relative to my docker-compose file.
In your Dockerfile for creating MariaDB, add the abc.sql file to your docker entrypoint at the end, like so:
COPY abc.sql /docker-entrypoint-initdb.d/
Remove the - /.../mariadb/initdb:/docker-entrypoint-initdb.d mapping, as any file copied into the entrypoint directory will be executed.
Note: Windows containers do not execute anything in docker-entrypoint-initdb.d/

Docker - container started by docker-compose changing file ownership to root

I am starting six or seven containers via a docker-compose file. One container is causing a major problem! Here is the relevant section:
services:
  ...
  main-app:
    image: mycompany/sys:1.2.3
    container_name: "main-app-container"
    ports:
      - "8080:8080"
      - "8009"
    volumes:
      - db_data:/var/lib/home/data:rw
      - /opt/mycompany/sys/config:/opt/mycompany/sys/config:rw
    networks:
      - systeminternal
    hostname: "mylocalhost.company.com"

volumes:
  db_data:
    driver: local

networks:
  systeminternal:
When the main-app-container is started via docker-compose up (as the root user), the file system permissions in many of the directories of the committed container are all changed to root! This is running on Ubuntu 14.04, Docker 1.12.x (not sure which x).
We have another system where we run everything as a local user. When we exec a shell into that container, all the files are owned by our local user, the ownership they had when the image was committed. From googling, I am pretty sure it has something to do with the volumes, but I could not find anything definitive. Any help is welcome!
This is the expected behavior for host mounts: everything inside /opt/mycompany/sys/config will have the same UID/GID that the files have on the host. That is by design.
Either change the files to the UID/GID you need on the host, chown -R 123:321 /opt/mycompany/sys/config, or set up your container to be happy using the host's UID/GID.
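A hedged sketch of that second option, assuming your host user is UID/GID 1000 (check with id -u and id -g):

services:
  main-app:
    image: mycompany/sys:1.2.3
    user: "1000:1000"    # run the container process as the host user
    volumes:
      - /opt/mycompany/sys/config:/opt/mycompany/sys/config:rw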
It has nothing to do with docker-compose, it would happen the same way when you use
docker run -v /opt/mycompany/sys/config:/opt/mycompany/sys/config mycompany/sys:1.2.3
