Create a directory in docker-compose.yml file

I have a docker-compose.yml that creates an SFTP container on my Docker host. I'd like to run a script from the yml file, since I want directories to be created automatically as soon as I run docker-compose up.
Here's my yml file:
sftp:
  image: atmoz/sftp
  volumes:
    - C:\tmp\sftp:/home/foo/upload
  ports:
    - "2222:22"
  command: username:password:1001
Is there a way to run mkdir and chmod from this file?

You do not need to create the directory or set permissions manually; just pass the directory name in the command and the entrypoint will create it for you. Here is the simplest example.
Define users in (1) command arguments, (2) SFTP_USERS environment
variable or (3) in file mounted as /etc/sftp/users.conf (syntax:
user:pass[:e][:uid[:gid[:dir1[,dir2]...]]] ..., see below for
examples)
Using docker-compose:
sftp:
  image: atmoz/sftp
  command: username:password:100:100:upload
This will create a user named username and a directory upload under /home/username.
You can verify this with:
docker exec -it --user username <container_id> bash -c "ls /home/username"
If you want to access the uploaded files from the host, just add a bind mount in your docker-compose file:
sftp:
  image: atmoz/sftp
  command: username:password:100:100:upload
  volumes:
    - /host/upload:/home/username/upload
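Applied to the compose file from the question, that might look like the sketch below (the gid 1001 is an assumption added to mirror the uid; the Windows bind mount is kept from the question):
sftp:
  image: atmoz/sftp
  volumes:
    - C:\tmp\sftp:/home/username/upload
  ports:
    - "2222:22"
  command: username:password:1001:1001:upload
The entrypoint then creates /home/username/upload inside the container, and the bind mount exposes it at C:\tmp\sftp on the host.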
Examples
Simplest docker run example
docker run -p 22:22 -d atmoz/sftp foo:pass:::upload
User "foo" with password "pass" can login with sftp and upload files
to a folder called "upload". No mounted directories or custom UID/GID.
Later you can inspect the files and use --volumes-from to mount them
somewhere else (or see next example).
See the official documentation.


Use credentials inside docker container

I'm building a Docker image from a project that has a file with default credentials for the database. At container run time, I want to pass the real credentials and replace the variables defined in that file. What is the best way to do it? I tried to use environment variables, but it's not working.
db_config.yml:
host: ${HOST}
user: ${USER}
pass: ${PASS}
port: ${PORT}
db: ${DB_NAME}
docker-compose.yml:
version: '2.3'
services:
  test_ctr:
    container_name: test
    image: container:latest
    network_mode: "host"
    environment:
      - HOST=${HOST}
      - USER=${USER}
      - PASS=${PASS}
      - PORT=${PORT}
      - DB_NAME=${DB_NAME}
db_config.yml is baked into the image and the language is Python. Basically, when I run the container, db_config.yml is read by a script, which uses the file's credentials. When I build the image, db_config.yml has default credentials, but when I run the container I want to replace them.
To debug this try running:
docker exec -it <name-of-the-container> <command>
In your case this translates to:
docker exec -it test sh
This should open a shell inside the container.
Then type:
printenv
This will print all environment variables and their values (that way you will see whether the values you have passed are present).
There will be a problem if the container is crashing at startup (in this case it's not possible to use docker exec).
TIP:
Use a .env file located in the same directory as docker-compose.yml (or whatever your compose file is named) to pass variables.
.env:
KEY1=value1
KEY2=value2
In your case this might look something like:
HOST=1.2.3.4
USER=sa
PASS=42
PORT=4242
DB_NAME=mydb
When you run:
docker-compose up
docker-compose will look for this .env file and inject the values from it.
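If the Python script reads db_config.yml literally rather than expanding the ${...} placeholders itself, another option is to render the file from the environment when the container starts, for example with envsubst. This is only a sketch: it assumes envsubst (from the gettext package) is installed in the image, and the template path /app/db_config.yml.tpl and entry point /app/main.py are placeholders:
services:
  test_ctr:
    image: container:latest
    env_file: .env   # or keep the environment: block from the question
    # render the real config from the template, then start the app
    command: sh -c "envsubst < /app/db_config.yml.tpl > /app/db_config.yml && python /app/main.py"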
Good luck

Mounted directory empty with docker-compose and custom Dockerfile

I am very (read very) new to Docker so experimenting. I have created a very basic Dockerfile to pull in Laravel:
FROM composer:latest
RUN composer_version="$(composer --version)" && echo $composer_version
RUN composer global require laravel/installer
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel site
My docker-compose.yml file looks like:
version: '3.7'
services:
  laravel:
    build:
      context: .
      dockerfile: laravel.dockerfile
    container_name: my_laravel
    network_mode: host
    restart: on-failure
    volumes:
      - ./site:/var/www/site
When I run docker-compose up, the ./site directory is created but its contents are empty. I've put this in docker-compose as I plan on including other things like nginx, mysql, php, etc.
The command:
docker run -v "/where/i/want/data/site:/var/www/site" my_laravel
Results in the same behaviour.
I know the install is successful as I modified my dockerfile with the following two lines appended to it:
WORKDIR /var/www/site
RUN ls -la
Which gives me the correct listing.
Clearly misunderstanding something here. Any help appreciated.
EDIT: So, I was able to get this to work... although it's slightly more difficult than just specifying a path.
You can accomplish this by specifying a volume in docker-compose.yml. The path to the directory (on the host) is labeled as device in the compose file. It appears that the root of the path has to be an actual volume (possibly a share would work), but the 'destination' of the path can be a directory on the specified volume.
I created a new volume called docker on my machine, but I suppose you could do this with your existing disk/volume.
I am on a Mac and this docker-compose.yml file worked for me:
version: '3.7'
services:
  nodemon-test:
    container_name: my-nodemon-test
    image: oze4/nodemon-docker-test
    ports:
      - "1337:1337"
    volumes:
      - docker_test_app:/app # see comment below on which name to use here
volumes:
  docker_test_app: # use this name under `volumes:` for the service
    name: docker_test_app
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /Volumes/docker/docker_test_app
The container specified exists on my Docker Hub; this is the source code for it, just in case you are worried about anything malicious. I created it like two weeks ago to help someone else on StackOverflow.
(Screenshot: files from the container visible on my machine, the host.)
You can read more about Docker Volume configs here if you would like.
ORIGINAL ANSWER:
It looks like you are trying to share the build directory with your host machine. After some testing, it appears Docker will overwrite the specified path in the container with the contents of the path on the host.
If you run docker logs my_laravel you should see an error about missing files at /var/www/site. So, even though the build is successful, once Docker mounts the directory from your machine (./site) onto the container (/var/www/site) it overwrites the path within the container (/var/www/site) with the contents of the path on your host (./site), which is empty.
To test and make sure the contents of /var/www/site are in fact being overwritten, you can run docker exec -it my_laravel /bin/bash (you may need to replace /bin/bash with /bin/sh). This will give you command line access inside of the container. From there you can do ls -a /var/www/site.
Furthermore, you can also pre-stage ./site to have a random test file in it (test.txt or whatever), then docker-compose up -d, then run the same docker exec command from the step above and see if the staged test.txt file is now inside the container. This gives you definitive evidence that when you mount a host directory as a volume, the data on your host overwrites the data in the container.
With that being said, sharing something like a log directory this way will work; the volume path specified in the container is still overwritten, but the difference is that the container writes to that path rather than relying on it for config files / app files.
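For example (a sketch; the paths are placeholders, not taken from the question), sharing a directory the container only writes to works fine, because nothing needs to pre-exist in it:
services:
  laravel:
    volumes:
      - ./logs:/var/www/site/storage/logs   # the container writes here; an empty host dir is fine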
Hope this helps.

How to add files in docker container and make them accessible from other containers?

Short version:
I want to add files to a docker container in docker-compose or a Dockerfile, and I want to make them accessible from the other containers defined in my docker-compose file. How can I do that?
Long version:
I have a Python app in a container that uses a .csv file to generate a POJO machine learning model.
I also have a Java app in a container that uses the POJO machine learning model and appends to the .csv file. The Java app has a fileWatcher() method implemented.
The containers are built from the docker-compose file, which calls a Dockerfile for each of them. So I want to add the files this way and not with manual docker commands.
You can add the same named volume to different containers:
docker volume create --name volume_data
docker run -t -i -v volume_data:/public debian:jessie /bin/bash
docker run -t -i -v volume_data:/public2 debian:jessie /bin/bash
or as docker-compose.yml
services:
  assets:
    image: any_asset_image
    volumes:
      - assets:/public/assets
  proxy:
    image: nginx
    volumes:
      - assets:/public/assets  # mount the same named volume in both services
volumes:
  assets:
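Applied to the question (a sketch; the service names, build contexts and the /data path are placeholders), the Python and Java apps could share one named volume like this:
services:
  python-model:
    build: ./python-app          # placeholder build context / Dockerfile
    volumes:
      - model-data:/data         # the Python app writes the .csv and POJO model here
  java-watcher:
    build: ./java-app            # placeholder build context / Dockerfile
    volumes:
      - model-data:/data         # the Java fileWatcher() picks up changes here
volumes:
  model-data: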

Docker-compose set user and group on mounted volume

I'm trying to mount a volume in docker-compose for an Apache image. The problem is that Apache in my container runs as www-data:www-data, but the mounted directory is created as root:root. How can I specify the user of the mounted directory?
I tried to run the command setupApacheRights.sh, which does chown -R www-data:www-data /var/www, but it says chown: changing ownership of '/var/www/somefile': Permission denied.
services:
  httpd:
    image: apache-image
    ports:
      - "80:80"
    volumes:
      - "./:/var/www/app"
    links:
      - redis
    command: /setupApacheRights.sh
I would prefer to be able to specify the user under which it will be mounted. Is there a way?
To achieve the desired behavior without changing owner / permissions on the host system, do the following steps.
1. Get the IDs of the user and/or group you want the permissions to match by executing the id command on your host system; this will show the uid and gid of your current user, as well as the IDs of all groups the user is in.
$ id
2. Add this definition to your docker-compose.yml:
user: "${UID}:${GID}"
so your file could look like this:
php: # this is my service name
  user: "${UID}:${GID}" # we added this line to get a specific user / group id
  image: php:7.3-fpm-alpine # this is my image
  # and so on
3. Set the values in your .env file:
UID=1000
GID=1001
3a. Alternatively you can extend your ~/.bashrc file with:
export UID GID
to define it globally rather than defining it in a .env file for each project.
If this does not work for you (on my current distro, for example, the GID is not set by this), use the following two lines instead:
export UID=$(id -u)
export GID=$(id -g)
Thanks #SteenSchütt for the easy solution for defining the UID / GID globally.
Now your user in the container has the id 1000 and the group is 1001 and you can set that differently for every environment.
Note: Please replace the IDs I used with the user / group IDs you found on your host system. Since I cannot know which IDs your system is using I gave some example group and user IDs.
If you don't use docker-compose or want to know more different approaches to achieve this have a read through my source of information: https://dev.to/acro5piano/specifying-user-and-group-in-docker-i2e
If the volume mount folder does not exist on your machine, docker will create it (with root user), so please ensure that it already exists and is owned by the userid / groupid you want to use.
I add an example for a dokuwiki container to explain it better:
version: '3.5'
services:
  dokuwiki:
    user: "${UID}" # set a specific user id so the container can write in the data dir
    image: bitnami/dokuwiki:latest
    ports:
      - '8080:8080'
    volumes:
      - '/home/manuel/docker/dokuwiki/data:/bitnami/dokuwiki/'
    restart: unless-stopped
    expose:
      - "8080"
The dokuwiki container will only be able to initialize correctly if it has write access to the host directory /home/manuel/docker/dokuwiki/data.
If this directory does not exist on startup, docker will create it for us, but it will have root:root as user & group, and therefore the container startup will fail.
If we create the folder before starting the container
mkdir -p /home/manuel/docker/dokuwiki/data
and then check with
ls -nla /home/manuel/docker/dokuwiki/data | grep ' \.$'
which uid and gid the folder has, and confirm that they match the ones we put in our .env file in step 3 above.
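If the IDs do not match, one quick fix (a sketch using the example IDs from step 3 above) is to change the ownership of the host directory before starting the container:
sudo chown 1000:1001 /home/manuel/docker/dokuwiki/data   # uid/gid from the .env example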
The bad news is there's no owner/group/permission settings for volume 😢. The good news is the following trick will let you bake it into your config, so it's fully automated 🎉.
In your Dockerfile, create an empty directory in the right location and with desired settings.
This way, the directory will already be present when docker-compose mounts to the location. When the server mounts during boot (based on docker-compose), the mounting action happily leaves those permissions alone.
Dockerfile:
# setup folder before switching to user
RUN mkdir /volume_data
RUN chown postgres:postgres /volume_data
USER postgres
docker-compose.yml
volumes:
  - /home/me/postgres_data:/volume_data
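To confirm it worked (a sketch; the service name db is a placeholder for whatever your compose file calls the postgres service), check the ownership of the mount point once the container is up:
docker-compose exec db ls -ld /volume_data   # should be owned by postgres:postgres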
First determine the uid of the www-data user:
$ docker exec DOCKER_CONTAINER_ID id
uid=100(www-data) gid=101(www-data) groups=101(www-data)
Then, on your docker host, change the owner of the mounted directory using the uid (100 in this example):
chown -R 100 ./
Dynamic Extension
If you are using docker-compose you may as well go for it like this:
$ docker-compose exec SERVICE_NAME id
uid=100(www-data) gid=101(www-data) groups=101(www-data)
$ chown -R 100 ./
You can put that in a one-liner:
$ chown -R $(docker-compose exec SERVICE_NAME id -u) ./
The -u flag will only print the uid to stdout.
Edit: fixed casing error of CLI flag. Thanks #jcalfee314!
Adding rw to the end of the volume mount worked for me:
services:
  httpd:
    image: apache-image
    ports:
      - "80:80"
    volumes:
      - "./:/var/www/app:rw"
    links:
      - redis
    command: /setupApacheRights.sh
Set the user to www-data for this compose service:
user: "www-data:www-data"
Example:
wordpress:
  depends_on:
    - db
  image: wordpress:5.5.3-fpm-alpine
  user: "www-data:www-data"
  container_name: wordpress
  restart: unless-stopped
  env_file:
    - .env
  volumes:
    - ./wordpress/wp-content:/var/www/html/wp-content
    - ./wordpress/wp-config-local.php:/var/www/html/wp-config.php
If your volumes create ownership issues, you might need to find your volume's mount path:
cmd: docker volume ls
After that, identify your volume name, then inspect its mount path:
cmd: docker volume inspect <volume name>
Check the mount point there and go to that mount point on your docker host machine,
where you can check the ownership of the volume:
cmd: ls -l
If it shows root:root, change the ownership there to your docker user:
cmd: chown docker_user_id:docker_group_id -R volume_path
Note: you can find your docker user id & group id by entering the container's shell and running the id command:
cmd: docker-compose run --rm <container_name> bash
cmd: id
output: uid=102(www-data) gid=102(www-data) groups=102(www-data)
A similar thread can be found here: https://www.hamaraweb.com/sms/407/docker-volume-ownership-issue-errno-13-permission-denied-bgb6ld/

Adding files to standard images using docker-compose

I'm unsure if something obvious escapes me or if it's just not possible, but I'm trying to compose an entire application stack with images from Docker Hub.
One of them is mysql, and it supports adding custom configuration files through volumes and running .sql files from a mounted directory.
But I have these files on the machine where I'm running docker-compose, not on the Docker host. Is there no way to specify files from the local machine to copy into the container before it runs its entrypoint/cmd? Do I really have to create local images of everything just for this case?
Option A: Include the files inside your image. This is less than ideal since you are mixing configuration files with your image (that should really only contain your binaries, not your config), but satisfies the requirement to use only docker-compose to send the files.
This option is achieved by using docker-compose to build your image, and that build will send over any files from the build directory to the remote docker engine. Your docker-compose.yml would look like:
version: '2'
services:
  my-db-app:
    build: db/.
    image: custom-db
And db/Dockerfile would look like:
FROM mysql:latest
COPY ./sql /sql
The entrypoint/cmd would remain unchanged. You would need to run docker-compose up --build if the image already exists and you need to change the sql files.
Option B: Use a volume to store your data. This cannot be done directly inside of docker-compose. However it's the preferred way to include files from outside of the image into the container. You can populate the volume across the network by using the docker CLI and input redirection along with a command like tar to pack and unpack those files being sent over stdin:
tar -cC sql . | docker run --rm -i -v sql-files:/sql \
  busybox /bin/sh -c "tar -xC /sql"
Run that via a script and then have that same script bounce the db container to reload that config.
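As a sketch of such a script (the service name my-db-app matches the compose examples in this answer; adjust paths and names to your setup):
#!/bin/sh
# repopulate the named volume from the local ./sql directory
tar -cC sql . | docker run --rm -i -v sql-files:/sql busybox /bin/sh -c "tar -xC /sql"
# bounce the db service so it picks up the new files
docker-compose restart my-db-app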
Option C: Use some kind of network attached filesystem. If you can configure NFS on the host where you are running your docker CLI, you can connect to those NFS shares from the remote docker node using one of the below options:
# create a reusable volume
$ docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=192.168.1.1,rw \
    --opt device=:/path/to/dir \
    foo
# or from the docker run command
$ docker run -it --rm \
    --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
    foo
# or to create a service
$ docker service create \
    --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
    foo
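The same NFS-backed volume can also be declared directly in a compose file (a sketch mirroring the docker volume create options above):
volumes:
  foo:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.1,rw
      device: ":/path/to/dir"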
Option D: With swarm mode, you can include files as configs in your image. This allows configuration files, that would normally need to be pushed to any node in the swarm, to be sent on demand to the node where your service is running. This uses a docker-compose.yml file to define it, but swarm mode isn't using docker-compose itself, so this may not fit your specific requirements. You can run a single node swarm mode cluster, so this option is available even if you only have a single node. This option does require that each of your sql files are added as a separate config. The docker-compose.yml would look like:
version: '3.4'
configs:
  sql_file_1:
    file: ./file_1.sql
services:
  my-db-app:
    image: my-db-app:latest
    configs:
      - source: sql_file_1
        target: /sql/file_1.sql
        mode: 444
Then instead of a docker-compose up, you'd run a docker stack deploy -c docker-compose.yml my-db-stack.
If you cannot use volumes (e.g. you want a stateless docker-compose.yml and are using a remote machine), you can have the config file written by the command.
Example for nginx config in official image:
version: "3.7"
services:
nginx:
image: nginx:alpine
ports:
- 80:80
environment:
NGINX_CONFIG: |
server {
server_name "~^www\.(.*)$$" ;
return 301 $$scheme://$$1$$request_uri ;
}
server {
server_name example.com
...
}
command:
/bin/sh -c "echo \"$$NGINX_CONFIG\" > /etc/nginx/conf.d/redir.conf; nginx -g \"daemon off;\""
The environment variable could also be saved in a .env file; you can use Compose's extend feature or load it from the shell environment (wherever you fetched it from):
https://docs.docker.com/compose/compose-file/#env_file
https://docs.docker.com/compose/compose-file/#variable-substitution
To get the original entrypoint command of a container:
docker container inspect [container] | jq --raw-output .[0].Config.Cmd
To investigate which file to modify this usually will work:
docker exec --interactive --tty [container] sh
This is how I'm doing it with volumes:
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - ./shell_scripts:/shell_scripts
I think you have to do this in a compose file:
volumes:
  - src/file:dest/path
As a more recent update to this question: with a docker swarm hosted on Amazon, for example, you can define a volume that can be shared by services and is available across all nodes of the swarm (using the cloudstor driver, which in turn has AWS EFS underlying for persistence).
version: '3.3'
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - shell_scripts:/shell_scripts
volumes:
  shell_scripts:
    driver: "cloudstor:aws"
With Compose V2 you can simply do (as in the documentation):
docker compose cp src [service:]dest
Before v2, you can use the docker cp workaround explained in the associated issue:
docker cp /path/to/my-local-file.sql "$(docker-compose ps -q mycontainer)":/file-on-container.sql
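Applied to the question's MySQL case, that might look like the following sketch (the service name my-db-app and both paths are placeholders):
docker compose cp ./sql/init.sql my-db-app:/docker-entrypoint-initdb.d/init.sql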
