docker-compose execute volume script as root - docker

I have a docker-compose.yml like this, for example:
version: '3'
services:
  db:
    #build: db
    image: percona:5.7.24-centos
    ports:
      - '3306:3306'
    environment:
      MYSQL_ROOT_PASSWORD: pass
      MYSQL_DATABASE: bc
      MYSQL_PASSWORD: pass
    volumes:
      - ./db:/docker-entrypoint-initdb.d
and the script is, e.g.:
mkdir /home/workdirectory/
There is no sudo in that image.
The default user is mysql.
The initial working directory is just /.
So how can I execute the scripts inside ./db as root on that image?

You can build your own docker image inheriting from percona:5.7.24-centos and switch the user or install sudo there. Or just create the necessary directories in the Dockerfile.
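For example, a minimal sketch of that approach (the directory path is the one from the question; everything else is standard Dockerfile syntax):
FROM percona:5.7.24-centos
USER root
# create the directory at build time, so no root access is needed at run time
RUN mkdir -p /home/workdirectory/ && chown mysql:mysql /home/workdirectory/
# switch back to the image's default user
USER mysql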

I'd suggest the following, assuming your script is a bash script and it's placed inside the db folder:
script.sh
#!/bin/bash
mkdir -p /home/workdirectory/some-sub-folder/
Then make sure your container is up and running by executing docker-compose up -d.
Then use the following command to execute the script from inside the container (add -u root so it runs as root, which is what the question asks for):
docker exec -u root db /docker-entrypoint-initdb.d/script.sh
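Note that the script also needs the executable bit set on the host side, or the exec call will fail with a permission error; assuming the file lives at ./db/script.sh:
chmod +x ./db/script.sh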
Link:
https://docs.docker.com/engine/reference/commandline/exec/

Related

Not able to mount Docker Volume for PhpMyAdmin

I've installed MySQL and phpMyAdmin with Docker.
The MySQL volume mount works perfectly fine,
but I also want the container's /var/www/html/libraries and /var/www/html/themes folders to be saved/persisted to my host,
so that if I change any file, the change stays.
This is my docker-compose.yml
version: '3.5'
services:
  mysql:
    container_name: mysql
    image: mysql
    restart: always
    volumes:
      - ./var/lib/mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
  phpmyadmin:
    container_name: phpmyadmin
    image: phpmyadmin/phpmyadmin:latest
    restart: always
    volumes:
      - ./phpmyadmin/libraries:/var/www/html/libraries # Here's the problem
      - ./phpmyadmin/themes:/var/www/html/themes # Here's the problem
    environment:
      PMA_HOST: mysql
The current problem is that it does create the folders ./phpmyadmin/libraries and ./phpmyadmin/themes,
but they are empty inside, and the container's directories (/var/www/html/libraries, /var/www/html/themes) also become empty.
I'm very new to Docker, and currently I have no clue :(
Many thanks in advance.
Your problem is that /var/www/html is populated at build time and volumes are mounted at run time which causes /var/www/html to be overwritten by what you have locally (i.e. nothing).
You need to extend the Dockerfile for PHPMyAdmin to delay populating those directories until after the volumes have been mounted. You'll need something like this setup:
Modify docker-compose.yml to the following:
...
phpmyadmin:
  container_name: phpmyadmin
  build:
    # Use the Dockerfile located at ./build/phpmyadmin/Dockerfile to build this image
    context: ./build/phpmyadmin
    dockerfile: Dockerfile
  restart: always
  volumes:
    - ./phpmyadmin/libraries:/var/www/html/libraries
    - ./phpmyadmin/themes:/var/www/html/themes
  environment:
    PMA_HOST: mysql
Create a file at ./build/phpmyadmin/Dockerfile with this content:
FROM phpmyadmin/phpmyadmin:latest
# Move the directories you want into a temporary directory
RUN mv /var/www/html /tmp/
# Modify the start up of this image to use a custom script
COPY ./custom-entrypoint.sh /custom-entrypoint.sh
RUN chmod +x /custom-entrypoint.sh
ENTRYPOINT ["/custom-entrypoint.sh"]
CMD ["apache2-foreground"]
Create a custom entrypoint at ./build/phpmyadmin/custom-entrypoint.sh with this content:
#!/bin/sh
# Copy over the saved files
cp -r /tmp/html /var/www
# Kick off the original entrypoint
exec /docker-entrypoint.sh "$@"
Then you can build and start everything with docker-compose up --build.
Note: this will probably cause issues for you if you're trying to version control these directories - you'll probably need to modify custom-entrypoint.sh.
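One possible modification (an assumption on my part, not from the original answer) is to make the copy non-clobbering, so files that already exist in the mounted volumes win over the baked-in defaults:
#!/bin/sh
# -n = no-clobber: restore the files moved aside at build time,
# but never overwrite files that already exist in the mounted volumes
# (assumes a cp that supports -n, e.g. GNU cp or a recent busybox)
cp -rn /tmp/html /var/www
exec /docker-entrypoint.sh "$@"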

Docker persisted volume has no permissions (Apache Solr)

My docker-compose.yml:
solr:
  image: solr:8.6.2
  container_name: myproject-solr
  ports:
    - "8983:8983"
  volumes:
    - ./data/solr:/var/solr/data
  networks:
    static-network:
      ipv4_address: 172.20.1.42
After bringing it up with docker-compose up -d --build, the solr container is down and the log (docker logs myproject-solr) shows this:
Copying solr.xml
cp: cannot create regular file '/var/solr/data/solr.xml': Permission denied
I've noticed that if I give full permissions on my machine to the data directory (sudo chmod 777 ./data/solr/ -R) and run Docker again, everything is fine.
I guess the issue comes from the fact that the solr user inside the container doesn't exist on my machine, and Docker creates the data/solr folder as root:root. Since my ./data folder is gitignored, I cannot track these folder permissions in version control.
I'd like to know a workaround to manage permissions properly so that the data is persisted.
It's a known "issue" with docker-compose: all files created by the Docker engine are owned by root:root. It's usually solved in one of two ways:
Create the volume in advance. In your case, you can create the ./data/solr directory in advance, with appropriate permissions. You might make it accessible to anyone, or, better, change its owner to the solr user. The solr user and group ids are hardcoded inside the solr image: 8983 (Dockerfile.template)
mkdir -p ./data/solr
sudo chown 8983:8983 ./data/solr
If you want to avoid running additional commands before docker-compose, you can create an additional container which will fix the permissions:
version: "3"
services:
initializer:
image: alpine
container_name: solr-initializer
restart: "no"
entrypoint: |
/bin/sh -c "chown 8983:8983 /solr"
volumes:
- ./data/solr:/solr
solr:
depends_on:
- initializer
image: solr:8.6.2
container_name: myproject-solr
ports:
- "8983:8983"
volumes:
- ./data/solr:/var/solr/data
networks:
static-network:
ipv4_address: 172.20.1.42
There is a docker-compose-only solution too :)
Problem
Docker mounts local folders with root permissions.
In Solr's docker image, the default user is solr, for a good reason: Solr commands should be run with this user (you can force them to run as root, but that is not recommended).
Most Solr commands require write permissions to /var/solr/, for data and log storage.
In this context, when you run a solr command as the solr user, it is rejected because that user doesn't have write permission to /var/solr/.
Solution
What you can do is first start the container as root to change the permissions of /var/solr/, and then switch to the solr user to run all the necessary Solr commands and start the Solr server.
In the example below, we use solr-precreate to create a default core and start solr.
version: '3.7'
services:
  solr:
    image: solr:8.5.2
    volumes:
      - ./mnt/solr:/var/solr
    ports:
      - 8983:8983
    user: root # run as root to change the permissions of the solr folder
    # Change permissions of the solr folder, create a default core and start solr as the solr user
    command: bash -c "
      chown -R 8983:8983 /var/solr
      && runuser -u solr -- solr-precreate default-core"
Set with a Dockerfile
It's possibly not exactly what you wanted, as the files aren't persisted when rebuilding the container, but it solves the permissions problem. Copy the files over and chown them with a Dockerfile:
FROM solr:8.7.0
COPY --chown=solr ./data /var/solr/data
This is more useful if you're trying to initialise a single core:
FROM solr:8.7.0
COPY --chown=solr ./core /var/solr/data/someCollection
It also has the advantage that you can create an image for reuse.
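For example, to build and run it (the tag my-solr is just a placeholder):
docker build -t my-solr .
docker run --rm -p 8983:8983 my-solr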
With a named volume
For persistence, you can also create a volume (in this case core) and copy the contents of a directory (also called core here), assigning the rights to the files on the way:
docker container create --name temp -v core:/data tianon/true || exit $?
tar -cf - --directory core --owner 8983 --group 8983 . | docker cp - temp:/data
docker rm temp
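To double-check the result, you can list the ownership inside the named volume with a throwaway container (a quick sanity check, not part of the original recipe):
docker run --rm -v core:/data alpine ls -ln /data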
This was adapted from these answers:
https://github.com/moby/moby/issues/25245#issuecomment-365980572
https://stackoverflow.com/a/52446394
Then you can mount the named volume in your Docker Compose file:
version: '3'
services:
  solr:
    image: solr:8.7.0
    networks:
      - internal
    ports:
      - 8983:8983
    volumes:
      - core:/var/solr/data/someCollection
volumes:
  core:
    external: true
This solution persists the data without overriding the data on the host, it doesn't need the extra build step, and it can obviously be adapted for mounting the entire /var/solr/data folder.
It doesn't seem to matter that the mounted volume/directory doesn't have the correct rights (/var/solr/data/someCollection has owner root:root).

docker-compose run commands after up

I have the following docker-compose file
version: '3.2'
services:
  nd-db:
    image: postgres:9.6
    ports:
      - 5432:5432
    volumes:
      - type: volume
        source: nd-data
        target: /var/lib/postgresql/data
      - type: volume
        source: nd-sql
        target: /sql
    environment:
      - POSTGRES_USER="admin"
  nd-app:
    image: node-docker
    ports:
      - 3000:3000
    volumes:
      - type: volume
        source: ndapp-src
        target: /src/app
      - type: volume
        source: ndapp-public
        target: /src/public
    links:
      - nd-db
volumes:
  nd-data:
  nd-sql:
  ndapp-src:
  ndapp-public:
nd-app contains a migrations.sql and seeds.sql file. I want to run them once the container is up.
If I ran the commands manually, they would look like this:
docker exec nd-db psql admin admin -f /sql/migrations.sql
docker exec nd-db psql admin admin -f /sql/seeds.sql
When you run docker-compose up with this docker-compose file, it will run the container entrypoint command for both the nd-db and nd-app containers as part of starting them up. In the case of nd-db, this does some prep work and then starts the postgres database.
The entrypoint command is defined in the Dockerfile and combines the configured ENTRYPOINT and CMD. What you might do is override the ENTRYPOINT in a custom Dockerfile, or override it in your docker-compose.yml.
Looking at the postgres:9.6 Dockerfile, it has the following two lines:
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
You could add the following to your nd-db configuration in docker-compose.yml to retain the existing entrypoint but also "daisy-chain" a custom migration-script.sh step.
entrypoint: ["docker-entrypoint.sh", "migration-script.sh"]
The custom script needs only one special behavior: it needs to do a passthru execution of any following arguments, so the container continues on to start postgres:
#!/usr/bin/env bash
set -exo pipefail
psql admin admin -f /sql/migrations.sql
psql admin admin -f /sql/seeds.sql
exec "$#"
Does docker-compose -f path/to/config.yml exec nd-db psql admin admin -f /sql/migrations.sql work?
I've found that you have to specify the config file and the container when running commands from the laptop.

Is there a way to RUN a command after building one of two containers in docker-compose

Following case:
I want to build two containers with docker-compose. One is MySQL, the other is a .war file executed with Spring Boot that is dependent on MySQL and needs a working db. After the mysql container is built, I want to fill the db with my mysqldump file, before the other container is built.
My first idea was to have it in my mysql Dockerfile as
#RUN mysql -u root -p"$MYSQL_ROOT_PASSWORD" < /appsb.sql
but of course it wants to execute it while building.
I have no idea how to do it in the docker-compose file as a command; maybe that would work. Or do I need to build a script?
docker-compose.yml
version: "3"
services:
mysqldb:
networks:
- appsb-mysql
environment:
- MYSQL_ROOT_PASSWORD=rootpw
- MYSQL_DATABASE=appsb
build: ./mysql
app-sb:
image: openjdk:8-jdk-alpine
build: ./app-sb/
ports:
- "8080:8080"
networks:
- appsb-mysql
depends_on:
- mysqldb
networks:
- appsb-mysql:
Dockerfile for mysqldb:
FROM mysql:5.7
COPY target/appsb.sql /
#RUN mysql -u root -p"$MYSQL_ROOT_PASSWORD" < /appsb.sql
Dockerfile for the other springboot appsb:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY target/appsb.war /
RUN java -jar /appsb.war
Here is a similar issue (loading a dump.sql at startup) for a MySQL container: Setting up MySQL and importing dump within Dockerfile.
Option 1: import via a command in the Dockerfile.
Option 2: execute a bash script from docker-compose.yml.
Option 3: execute an import command from docker-compose.yml.
The simplest variant is sketched below.
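A minimal sketch (this relies on the mysql image's documented behavior of importing .sql files found in /docker-entrypoint-initdb.d when the data directory is first initialized; the file name comes from the question):
FROM mysql:5.7
# imported automatically on first startup, before the server accepts external connections
COPY target/appsb.sql /docker-entrypoint-initdb.d/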

Access db as localhost from docker container

I'm pretty new to Docker but am trying to use it to clean up some of my projects. One such project is a fairly simple PHP/MySQL application. I've "docker-ized" the app by adding a docker-compose.yml with db and php services. Here's my docker-compose.yml:
version: '2'
services:
  php:
    build: .
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./public_html:/var/www/html
    links:
      - db
  db:
    image: mysql:5.5
    ports:
      - "3306:3306"
    environment:
      MYSQL_USER: root
      MYSQL_PASSWORD:
      MYSQL_ROOT_PASSWORD:
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    volumes:
      - /c/dockerdata:/var/lib/mysql
This works correctly, however I have to change all my PHP scripts to use "db" instead of "localhost" when connecting to the MySQL database. I'm adding the Docker setup just as a way to clean up development, so I'm trying to avoid changing the PHP code itself. Is there a way I can configure this so that I'm able to use localhost or 127.0.0.1 to connect?
Docker doesn't allow you to modify /etc/hosts at image build time (a known issue).
You can, however, edit /etc/hosts at run time via the entrypoint option:
Create entrypoint.sh script
#!/bin/bash
# /etc/hosts is managed by Docker (bind-mounted), so sed -i can't replace it directly;
# work on a copy and copy it back instead
cp /etc/hosts /tmp/hosts
# comment out the default localhost entries so "localhost" can resolve to the db service
sed -e '/localhost/ s/^#*/#/' -i /tmp/hosts
cp /tmp/hosts /etc/hosts
# add your command here to run the php application
Add execute permissions to entrypoint.sh
chmod +x entrypoint.sh
Add below two lines to Dockerfile
ADD entrypoint.sh /entrypoint.sh
ENTRYPOINT /entrypoint.sh
Now do step 2) from my previous answer (below).
You can achieve this using the two steps below:
1) Add the below CMD to your Dockerfile:
CMD sed -e '/localhost/ s/^#*/#/' -i /etc/hosts
2) Replace 'db' with 'localhost' in docker-compose.yml
links:
  - db
db:
  image: mysql:5.5
as
links:
  - localhost
localhost:
  image: mysql:5.5
