Need mkdir permission for EC2 Docker container

I have a docker-compose which runs 3 containers:
selenium/hub
selenium/node-chrome
My own image of a Java program that uses the two containers above to log into a website, navigate to a download page, tick a checkbox, and click a submit button, which causes a file to be downloaded.
Everything runs fine on my PC, but on an EC2 instance the chrome node gets the error:
mkdir: cannot create directory '/home/selsuser'
and then other errors trying to create sub-directories.
How can I give a container mkdir permissions?
I would like to run this as an ECS-Fargate task, so I would also need to give a container mkdir permissions within that task.
Thanks

Well,
Thank you for the details. It seems you indeed need rights you do not have. What you can try is to create a user group and share it across your containers.
To do so,
Create a user group with a GID that does not already exist (run id in your terminal to see the GIDs already in use). We will assume 500 is not already used, and assign it to the shared folder:
chown :500 Downloads
Then give the appropriate rights to your new group and make new subfolders inherit the group (note the directory needs the execute bit to be traversable, hence 775 rather than 665):
chmod 775 Downloads && chmod g+s Downloads
(If you want to be at ease you can always give full permissions, up to you.)
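Putting those host-side steps together (a sketch; the group name selshare is an arbitrary placeholder, and since chown accepts a raw GID, creating a named host group is optional):
sudo groupadd -g 500 selshare   # optional: give GID 500 a name on the host
sudo chown :500 Downloads       # hand the folder over to group 500
sudo chmod 775 Downloads        # group members can read, write and traverse
sudo chmod g+s Downloads        # new subfolders inherit the group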
Then share the rights with a group created in the container thanks to a Dockerfile (replace <username> and <group_name> by whatever you want):
FROM selenium/node-chrome:3.141.59
RUN addgroup --gid 500 <group_name> && \
    adduser --disabled-password --gecos "" --force-badname --ingroup <group_name> <username>
USER <username>
Then of course don't forget to edit your docker-compose file:
selenium:
  build:
    context: <path_to_your_dockerfile>
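To apply the change, rebuild the service image before bringing the stack up (a sketch, using the selenium service name from the snippet above):
docker-compose build selenium
docker-compose up -d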
Hoping it will work :)
(From the author of the question)
I do have volume mapping, but I do not think it is connected to my problem. The problem is that the selenium/node-chrome container wants to create the directory. On my PC there are no problems, but on EC2 it gets an error that it cannot create the directory. I assume that on EC2 you need root privileges to do anything under /home.
Here is the complete docker-compose file:
version: "3"
services:
  hub:
    image: selenium/hub:3.141.59
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome:3.141.59
    shm_size: '1gb'
    depends_on:
      - hub
    environment:
      - HUB_HOST=hub
    volumes:
      - ./Downloads:/home/seluser/Downloads
  migros-module:
    image: freiburgbill/migros-docker
    depends_on:
      - chrome
    environment:
      - HUB_HOST=hub
      - BROWSER=chrome
    volumes:
      - ./migros-output:/usr/share/udemy/test-output
      - ./Downloads:/usr/share/udemy/Downloads

Thanks again to Paul Barrie for your input and for getting me to look closer at permissions.
For running the docker-compose file that worked on my PC but did not work on an EC2 instance, I created a /tmp/Downloads directory and gave it full rights (sudo chmod -R 2775 /tmp/Downloads), and then it ran without any problems!
For doing the same thing as an ECS-Fargate task, I created an EFS and attached it to an EC2 instance so I could go into it and set the permissions on the whole EFS (sudo chmod -R 777 /mnt/efs/fs1, where that is the default path connecting the EFS to the EC2). I then created the ECS-Fargate task, attaching the EFS as a volume. Then everything worked!
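For reference, the EFS attachment in the Fargate task definition looks roughly like this (a trimmed sketch; the family, volume name, and filesystem ID are hypothetical placeholders):
{
  "family": "selenium-downloads",
  "requiresCompatibilities": ["FARGATE"],
  "volumes": [
    { "name": "downloads",
      "efsVolumeConfiguration": { "fileSystemId": "fs-12345678" } }
  ],
  "containerDefinitions": [
    { "name": "chrome",
      "image": "selenium/node-chrome:3.141.59",
      "mountPoints": [
        { "sourceVolume": "downloads", "containerPath": "/home/seluser/Downloads" }
      ] }
  ]
}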
So in summary, the host where docker-compose is running has to have permissions for writing the file. With Fargate we cannot access the host, so an EFS has to be given permissions for writing the file.
I know there must be a better way of locking down the security to just what is needed, but the open permissions does work.
It would have been good if I could have changed the permissions of the Fargate temporary storage and used the bind mount, but I could not find a way to do that.

Related

docker-compose: volume problem: path on host created but not populated by container

I have the following docker-compose:
version: '3.7'
services:
  db:
    image: bitnami/mongodb:5.0.6
    volumes:
      - "/app/local-data:/data/db"
    env_file: ./db/.env
The problem is that data does not persist between docker-compose up/down, and Docker does not seem to use /app/local-data even though it creates it.
When I run docker-compose, the container starts and works normally. The directory /app/local-data is created by Docker, but MongoDB does not populate it, and no read/write error is shown on the console. This makes me think a temporary volume is assigned to the container instead. But if that is true, why does Docker still create /app/local-data and not use it?
Any ideas how I can debug this?
Docker directives like volumes: don't know anything about what's actually running in the image. That directive creates the specified host and container paths if required, and bind-mounts the host path into the container path. It's up to the application code to use that directory (or not).
If you look at the bitnami/mongodb Docker Hub page under "Persisting your database", the database is configured to store data in the /bitnami/mongodb directory inside the container, and that directory needs to be the second volumes: path. Also note the requirement that the data directory needs to be writable by user ID 1001, which may or may not exist on your host (there's no specific requirement to create it).
volumes:
  - "/app/local-data:/bitnami/mongodb"
  #                   ^^^^^^^^^^^^^^^^
sudo chown -R 1001 /app/local-data
sudo docker-compose up -d
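Putting it together, the corrected compose file would look like this (unchanged except for the container-side path):
version: '3.7'
services:
  db:
    image: bitnami/mongodb:5.0.6
    volumes:
      - "/app/local-data:/bitnami/mongodb"
    env_file: ./db/.env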

docker build issue with docker file: Jobber not working after adding a package

I'm still learning all this stuff around docker. Now I have an issue that I don't understand. Maybe one of you can explain to me what I did wrong.
I want to schedule some SQL scripts with jobber. Therefore I need to add the mysql-client package to a jobber image.
Dockerfile:
FROM jobber:latest
USER root
COPY install-packages.sh .
RUN chmod +x ./install-packages.sh
RUN ./install-packages.sh
install-packages.sh:
#!/bin/sh
apk update
apk upgrade
apk add mysql-client
rm -rf /var/cache/apk/*
My docker build command:
docker build . -t jobbermysql:20210110
Docker-Compose file to run the container:
version: '3'
services:
  jobbermysql:
    image: jobbermysql:20210110
    container_name: jobbermysqlcompose
    restart: always
    volumes:
      - /home/docker/datapath/jobber/jobberuser:/home/jobberuser
The docker build works fine, but when I run an instance of my image jobbermysql:20210110, jobber always reports:
jobbermysqlcompose | User root doesn't own jobfile
If I try to get some additional information about the jobs via direct access to the running container (e.g. a jobber init command to understand the issue):
/home/jobberuser # jobber init
Jobber doesn't seem to be running for user root.
(No socket at /var/jobber/0/cmd.sock.): stat /var/jobber/0/cmd.sock: no such file or directory
If I run the “old” default jobber image (without my mysql-client modification) it works fine, and both versions use the same volume mapping. So I think I have broken something in the docker build process.
version: '3'
services:
  jobbermysql:
    image: jobber:latest
    container_name: jobbermysqlcompose
    restart: always
    volumes:
      - /home/docker/datapath/jobber/jobberuser:/home/jobberuser
Can somebody give me a hint?
Many thanks and kind regards,
Holger
Jobber itself seems to be quite specific about the required file permissions of the .jobber file. Jobber's documentation states the following:
Jobfiles must be owned by the user who owns the home directory that
contains them, and their permissions must not allow the owning group
or any other users to write to them (i.e., the permission bits must
exclude 022)
Therefore, we need to set the ownership and permissions of the mounted file to match. As the official Docker image runs as USER jobberuser, we need:
chown 1000:1000 jobber-jobs.yml
chmod 600 jobber-jobs.yml
In your case, you switched to USER root, but did not switch back after installing the packages. The following Dockerfile & docker-compose.yml did work for me:
FROM jobber
USER root
RUN apk add --no-cache mysql-client
USER jobberuser
version: '3'
services:
  cron:
    image: mysql-jobber
    build: ./build
    volumes:
      - ./jobber-jobs.yml:/home/jobberuser/.jobber:ro
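With that in place, rebuilding and restarting should clear the error (a sketch, using the cron service name from the compose file above):
docker-compose build cron
docker-compose up -d
docker-compose logs -f cron   # should no longer report "User root doesn't own jobfile"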

docker volume permissions in neo4j

I'm having a bit of bother with the following docker compose setup:
version: '2.4'
services:
  graph_dev:
    image: neo4j:3.5.14-enterprise
    container_name: amlgraph_dev
    user: "100:101"
    ports:
      - 7474:7474
      - 7473:7473
      - 7687:7687
    volumes:
      - neo4jbackup:/backup
      - neo4jdata:/data
volumes:
  neo4jbackup:
  neo4jdata:
I would like to run the neo4j-admin command, which must be run as the user 100 (_apt). However, the volume I need to back up to, neo4jbackup, is mounted as root, and _apt can't write there.
How do I create a volume that _apt can write to? The user _apt:neo4j obviously does not exist on the host, and there are no users for which I have root on the Docker image.
I can think of two options:
1. Run the neo4j Docker container as a valid Linux user and group and give that user access to a backup folder. Here is what my script looks like (I don't use compose currently) to run neo4j in Docker under the current user:
docker run \
  --user "$(id -u):$(id -g)"
Here is an article that covers doing the same thing with compose:
https://medium.com/faun/set-current-host-user-for-docker-container-4e521cef9ffc
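For reference, a compose equivalent of the --user flag might look like this (a sketch based on the question's file, assuming UID and GID are exported in your shell):
version: '2.4'
services:
  graph_dev:
    image: neo4j:3.5.14-enterprise
    user: "${UID}:${GID}" # run as the current host user instead of 100:101
    volumes:
      - neo4jbackup:/backup
      - neo4jdata:/data
volumes:
  neo4jbackup:
  neo4jdata: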
2. (Hacky?) You could run neo4j-admin outside Docker, or in another container, in a process that does have access to the backup volume (I hear you want to run it as root?).
But of course I'm wondering why the backup process or the DB backup would be owned by root (as opposed to a DB owner or backup account...). Personally I feel it is best practice to avoid using the root account whenever possible.
I ended up solving this problem by running the command as _apt as required (docker-compose run graph_dev) and then using docker exec -it -u neo4j:neo4j graph_dev /bin/bash to copy the file over to the backup directory. Not elegant, but it works.

Is there a better way to avoid folder permission issues for docker containers launched from docker compose in manjaro?

Is there a better way to avoid folder permission issues when a relative folder is set in a docker-compose file on Manjaro?
Take the bitnami/elasticsearch:7.7.0 image as an example:
This image will always throw the error ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/bitnami/elasticsearch/data/nodes];.
I can get around it by:
creating the data directory with sudo, followed by chmod 777
attaching a docker volume
But I am looking for a solution that is a bit easier to manage, similar to the Docker experience on Ubuntu and OSX, where I do not have to first create a directory as root for the folder mapping to work.
I have made sure that my user is in the docker group by following the post install instructions on docker docs. I have no permission issues when accessing docker info, or sock.
docker-compose.yml
version: '3.7'
services:
  elasticsearch:
    image: bitnami/elasticsearch:7.7.0
    container_name: elasticsearch
    ports:
      - 9200:9200
    networks:
      - proxy
    environment:
      - ELASTICSEARCH_HEAP_SIZE=512m
    volumes:
      - ./data/:/bitnami/elasticsearch/data
      - ./config/elasticsearch.yml:/opt/bitnami/elasticsearch/config/elasticsearch.yml
networks:
  proxy:
    external: true
I am hoping for a more seamless experience when using my compose files from git, which work fine on other systems, but I run into this permission issue on the data folder on Manjaro.
I did check other posts on SO; some solutions are temporary, like disabling SELinux, while others require running Docker with the --privileged flag, but I am trying to do this from compose.
This has nothing to do with the Linux distribution but is a general problem with Docker and bind mounts. A bind mount is when you mount a directory of your host into a container. The problem is that the Docker daemon creates the directory under the user it runs with (root), and the UIDs/GIDs are mapped literally into the container.
Not that it is advisable to run as root, but depending on your requirements, the official Elasticsearch image (elasticsearch:7.7.0) runs as root and does not have this problem.
Another solution that would work for the bitnami image is to make the ./data directory owned by group root and group writable, since it appears the group of the Elasticsearch process is still root.
A third solution is to change the GID of the bitnami image to whatever group you had the data created with and make it group writable.
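For the second option, a minimal sketch of the host-side commands (assuming the compose file's relative ./data path):
mkdir -p ./data                # pre-create it so Docker doesn't create it as root:root
sudo chgrp -R root ./data      # bitnami/elasticsearch runs with group root (gid 0)
sudo chmod -R g+rwX ./data     # group-writable; X sets execute on directories only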

Docker-compose set user and group on mounted volume

I'm trying to mount a volume in docker-compose into an apache image. The problem is that apache in my Docker container runs as www-data:www-data, but the mounted directory is created as root:root. How can I specify the user of the mounted directory?
I tried to run the command setupApacheRights.sh (chown -R www-data:www-data /var/www), but it says chown: changing ownership of '/var/www/somefile': Permission denied.
services:
  httpd:
    image: apache-image
    ports:
      - "80:80"
    volumes:
      - "./:/var/www/app"
    links:
      - redis
    command: /setupApacheRights.sh
I would prefer to be able to specify the user under which it will be mounted. Is there a way?
To achieve the desired behavior without changing the owner / permissions on the host system, do the following steps.
1. Get the ID of the desired user and/or group you want the permissions to match by executing the id command on your host system; this will show you the uid and gid of your current user, as well as the IDs of all groups the user is in.
$ id
2. Add the definition to your docker-compose.yml:
user: "${UID}:${GID}"
so your file could look like this:
php: # this is my service name
  user: "${UID}:${GID}" # we added this line to get a specific user / group id
  image: php:7.3-fpm-alpine # this is my image
  # and so on
3. Set the values in your .env file:
UID=1000
GID=1001
3a. Alternatively you can extend your ~/.bashrc file with:
export UID GID
to define it globally rather than defining it in a .env file for each project.
If this does not work for you (like on my current distro, where the GID is not set by this), use the following two lines:
export UID=$(id -u)
export GID=$(id -g)
Thanks @SteenSchütt for the easy solution for defining the UID/GID globally.
Now your user in the container has the id 1000 and the group is 1001 and you can set that differently for every environment.
Note: Please replace the IDs I used with the user / group IDs you found on your host system. Since I cannot know which IDs your system is using I gave some example group and user IDs.
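To verify it worked, you can check the IDs inside the running container (a sketch, using the php service name from above):
export UID GID               # make the IDs visible to docker-compose
docker-compose up -d
docker-compose exec php id   # expect: uid=1000 gid=1001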
If you don't use docker-compose, or want to know more approaches to achieve this, have a read through my source of information: https://dev.to/acro5piano/specifying-user-and-group-in-docker-i2e
If the volume mount folder does not exist on your machine, Docker will create it (as the root user), so please ensure that it already exists and is owned by the user ID / group ID you want to use.
I'll add an example for a dokuwiki container to explain it better:
version: '3.5'
services:
  dokuwiki:
    user: "${UID}" # set a specific user id so the container can write in the data dir
    image: bitnami/dokuwiki:latest
    ports:
      - '8080:8080'
    volumes:
      - '/home/manuel/docker/dokuwiki/data:/bitnami/dokuwiki/'
    restart: unless-stopped
    expose:
      - "8080"
The dokuwiki container will only be able to initialize correctly if it has write access to the host directory /home/manuel/docker/dokuwiki/data.
If this directory does not exist on startup, Docker will create it for us, but it will have root:root as user and group. Therefore the container startup will fail.
If we create the folder before starting the container
mkdir -p /home/manuel/docker/dokuwiki/data
and then check with
ls -nla /home/manuel/docker/dokuwiki/data | grep ' \.$'
which uid and gid the folder has, we can verify that they match the ones we put in our .env file in step 3 above.
The bad news is there's no owner/group/permission setting for volumes 😢. The good news is that the following trick will let you bake it into your config, so it's fully automated 🎉.
In your Dockerfile, create an empty directory in the right location and with the desired settings.
This way, the directory will already be present when docker-compose mounts to the location. When the server mounts during boot (based on docker-compose), the mounting action happily leaves those permissions alone.
Dockerfile:
# setup folder before switching to user
RUN mkdir /volume_data
RUN chown postgres:postgres /volume_data
USER postgres
docker-compose.yml
volumes:
  - /home/me/postgres_data:/volume_data
First determine the uid of the www-data user:
$ docker exec DOCKER_CONTAINER_ID id
uid=100(www-data) gid=101(www-data) groups=101(www-data)
Then, on your docker host, change the owner of the mounted directory using the uid (100 in this example):
chown -R 100 ./
Dynamic Extension
If you are using docker-compose you may as well go for it like this:
$ docker-compose exec SERVICE_NAME id
uid=100(www-data) gid=101(www-data) groups=101(www-data)
$ chown -R 100 ./
You can put that in a one-liner:
$ chown -R $(docker-compose exec SERVICE_NAME id -u) ./
The -u flag will only print the uid to stdout.
Edit: fixed casing error of CLI flag. Thanks @jcalfee314!
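One caveat: docker-compose exec allocates a pseudo-TTY by default, so the captured uid can carry a trailing carriage return that breaks chown. If you hit that, the -T flag disables the TTY allocation:
chown -R "$(docker-compose exec -T SERVICE_NAME id -u)" ./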
Adding rw to the end of the volume mount worked for me:
services:
  httpd:
    image: apache-image
    ports:
      - "80:80"
    volumes:
      - "./:/var/www/app:rw"
    links:
      - redis
    command: /setupApacheRights.sh
Set the user www-data for this compose service:
user: "www-data:www-data"
Example:
wordpress:
  depends_on:
    - db
  image: wordpress:5.5.3-fpm-alpine
  user: "www-data:www-data"
  container_name: wordpress
  restart: unless-stopped
  env_file:
    - .env
  volumes:
    - ./wordpress/wp-content:/var/www/html/wp-content
    - ./wordpress/wp-config-local.php:/var/www/html/wp-config.php
If your volumes create ownership issues, you might need to find your volume's mount path:
cmd: docker volume ls
After that, identify your volume's name, then inspect its mount path:
cmd: docker volume inspect <volume name>
Check the mount point there and go to that mount point on your Docker host machine,
where you can check the ownership of the volume:
cmd: ls -l
If it shows root:root, change the ownership to your Docker user:
cmd: chown docker_user_id:docker_group_id -R volume_path
Note: you can find your Docker user ID and group ID by entering your container's shell and running the id command:
cmd: docker-compose run --rm <container_name> bash
cmd: id
output: uid=102(www-data) gid=102(www-data) groups=102(www-data)
A similar thread can be found here: https://www.hamaraweb.com/sms/407/docker-volume-ownership-issue-errno-13-permission-denied-bgb6ld/
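Put together, the sequence looks roughly like this (a sketch; the volume name and IDs are placeholders, and /var/lib/docker/volumes/<name>/_data is the local driver's default mount point):
docker volume ls
docker volume inspect my_volume   # look for the "Mountpoint" field
sudo ls -l /var/lib/docker/volumes/my_volume/_data
sudo chown -R 102:102 /var/lib/docker/volumes/my_volume/_data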
