How to build a PostGIS container as the root user using docker-compose up?
In the Dockerfile, separate attempts to set USER to root as well as to 0 did not work.
Setting user: '0' on the docker-compose service was also tried, to no avail.
The error I am getting is Permission denied.
id -u always reports 999 during the build; this seems to be a system user with limited privileges.
I would prefer to just run docker-compose up with no flags and keep all configuration in docker-compose.yml and/or the Dockerfile.
Dockerfile
FROM postgis/postgis:13-3.3
USER root
COPY ./startup.sh /docker-entrypoint-initdb.d/startup.sh
NOTE:
I realized that I should have added more context. I created another post that better describes the issue.
Open SSH tunnel during PostGIS Docker build
Please be careful with root; it can introduce vulnerabilities into your database.
I highly recommend using it only for development.
You can override the user for a one-off run with the --user flag:
docker-compose run --user root postgis
Or put it in your docker-compose file:
services:
  postgis:
    user: root
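For reference, a minimal sketch of how that could fit together with the Dockerfile above (the service name, build context, and port mapping are assumptions):
services:
  postgis:
    build: .          # builds the Dockerfile shown above
    user: root        # run the container process as root
    ports:
      - "5432:5432"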
Related
I have a docker-compose which runs 3 containers:
selenium/hub
selenium/node-chrome
My own image of a Java program that uses the two containers above to log into a website, navigate to a download page, click a checkbox, and then click a submit button, which causes a file to be downloaded.
Everything runs fine on my pc, but on an EC2 instance the chrome node gets the error:
mkdir: cannot create directory '/home/selsuser'
and then other errors trying to create sub-directories.
How can I give a container mkdir permissions?
I would like to run this as an ECS-Fargate task, so I would also need to give a container mkdir permissions within that task.
Thanks
Well,
Thank you for the details. It does seem that you need rights you do not have. What you can try is to create a user group and share it across your containers.
To do so,
Create a user group with a GID that does not already exist (run id in your terminal to see the existing GIDs). We will assume 500 is not already used:
chown :500 Downloads
Then give the appropriate rights to your new group and make all subfolders inherit the group you created:
chmod 775 Downloads && chmod g+s Downloads
(If you want to be at ease you can always give full permission, up to you)
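Putting the host-side steps together, a minimal sketch (the group name is a placeholder; sudo may or may not be needed depending on who owns the folder):
sudo groupadd -g 500 seldownloads   # create a group with GID 500 if one does not already exist
sudo chown :500 Downloads           # hand the folder over to that group
sudo chmod 775 Downloads            # owner and group get read/write/execute
sudo chmod g+s Downloads            # new files and subfolders inherit the group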
Then share the rights with a group created in the container via a Dockerfile (replace <username> and <group_name> with whatever you want):
FROM selenium/node-chrome:3.141.59
RUN addgroup --gid 500 <group_name> && \
    adduser --disabled-password --gecos "" --force-badname --ingroup <group_name> <username>
USER <username>
Then of course don't forget to edit your docker-compose file:
selenium:
  build:
    context: <path_to_your_dockerfile>
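Filled in against the node-chrome service from the question, that might look roughly like this (the context path is a placeholder):
chrome:
  build:
    context: ./selenium-chrome      # folder containing the Dockerfile above
  volumes:
    - ./Downloads:/home/seluser/Downloads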
Hoping it will work :)
(From the author of the question)
I do have volume mapping, but I do not think there is any connection there to the problem I have. The problem is the selenium/node-chrome container wants to create the directory. On my pc, there are no problems, on EC2 it causes an error that it cannot create the directory. I assume on EC2 you need root privs to do anything on /home.
Here is the complete docker-compose file:
version: "3"
services:
hub:
image: selenium/hub:3.141.59
ports:
- "4444:4444"
chrome:
image: selenium/node-chrome:3.141.59
shm_size: '1gb'
depends_on:
- hub
environment:
- HUB_HOST=hub
volumes:
- ./Downloads:/home/seluser/Downloads
migros-module:
image: freiburgbill/migros-docker
depends_on:
- chrome
environment:
- HUB_HOST=hub
- BROWSER=chrome
volumes:
- ./migros-output:/usr/share/udemy/test-output
- ./Downloads:/usr/share/udemy/Downloads
Thanks again to Paul Barrie for your input and help to get me looking closer at permissions.
For running the docker-compose file that worked on my PC but did not work on an EC2 instance, I created a /tmp/Downloads directory and gave it full rights (sudo chmod -R 2775 /tmp/Downloads), and then it ran without any problems!
For trying to do the same thing as an ECS-Fargate task, I created an EFS and attached it to an EC2 instance so I could go in and set the permissions on the whole EFS (sudo chmod -R 777 /mnt/efs/fs1, where that is the default path connecting the EFS to the EC2). I then created an ECS-Fargate task attaching the EFS as a volume. Then everything worked!
So in summary, the host where docker-compose is running has to have permissions for writing the file. With Fargate we cannot access the host, so an EFS has to be given permissions for writing the file.
I know there must be a better way of locking down the security to just what is needed, but the open permissions do work.
It would have been good if I could have changed the permissions of the Fargate temporary storage and used the bind mount, but I could not find a way to do that.
I'm having a bit of bother with the following docker compose setup:
version: '2.4'
services:
  graph_dev:
    image: neo4j:3.5.14-enterprise
    container_name: amlgraph_dev
    user: 100:101
    ports:
      - 7474:7474
      - 7473:7473
      - 7687:7687
    volumes:
      - neo4jbackup:/backup
      - neo4jdata:/data
volumes:
  neo4jbackup:
  neo4jdata:
I would like to run the neo4j-admin command, which must be run as user 100 (_apt). However, the volume I need to back up to, neo4jbackup, is mounted as root, and _apt can't write there.
How do I create a volume that _apt can write to? The user _apt:neo4j obviously does not exist on the host. There are no users for which I have root on the docker image.
I can think of two options:
Run the neo4j Docker container as a valid Linux user and group and give that user access to a backup folder. Here is what my script looks like (I don't use compose currently) to run neo4j in Docker under the current user:
docker run \
  --user "$(id -u):$(id -g)" \
  ...
Here is an article that covers doing the same thing with compose
https://medium.com/faun/set-current-host-user-for-docker-container-4e521cef9ffc
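The gist of that approach in compose form is to pass the host user's IDs in, roughly like this (a sketch; UID and GID have to be exported in the shell before running docker-compose):
services:
  graph_dev:
    image: neo4j:3.5.14-enterprise
    user: "${UID}:${GID}"
    volumes:
      - neo4jbackup:/backup
      - neo4jdata:/data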
(Hacky?) But you could run neo4j-admin outside Docker, or in another container, in a process that does have access to the backup volume. (I hear you want to run it as root?)
Of course, I'm wondering why the backup process or DB backup would be owned by root (as opposed to a DB owner or backup account). Personally, I feel it is best practice to avoid using the root account whenever possible.
I ended up solving this problem by running the command as _apt as required (docker-compose run graph_dev) and then using docker exec -it -u neo4j:neo4j graph_dev /bin/bash to copy the file over to the backup directory. Not elegant, but it works.
Is there a better way to avoid folder permission issues when a relative folder is set in a docker-compose file on Manjaro?
For instance, take the bitnami/elasticsearch:7.7.0 image as an example:
This image, as an example, will always throw the error ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/bitnami/elasticsearch/data/nodes];.
I can get around it by:
creating the data directory with sudo, followed by chmod 777
attaching a docker volume
But I am looking for a solution that is a bit easier to manage, similar to the Docker experience on Ubuntu and OSX, where I do not have to first create a directory as root in order for the folder mapping to work.
I have made sure that my user is in the docker group by following the post install instructions on docker docs. I have no permission issues when accessing docker info, or sock.
docker-compose.yml
version: '3.7'
services:
  elasticsearch:
    image: bitnami/elasticsearch:7.7.0
    container_name: elasticsearch
    ports:
      - 9200:9200
    networks:
      - proxy
    environment:
      - ELASTICSEARCH_HEAP_SIZE=512m
    volumes:
      - ./data/:/bitnami/elasticsearch/data
      - ./config/elasticsearch.yml:/opt/bitnami/elasticsearch/config/elasticsearch.yml
networks:
  proxy:
    external: true
I am hoping for a more seamless experience when using my compose files from git, which work fine on other systems, but I am running into this permission issue on the data folder on Manjaro.
I did check other posts on SO; some solutions are temporary, like disabling SELinux, while others require running Docker with the --privileged flag, but I am trying to do this from compose.
This has nothing to do with the Linux distribution but is a general problem with Docker and bind mounts. A bind mount is when you mount a directory of your host into a container. The problem is that the Docker daemon creates the directory under the user it runs with (root) and the UID/GIDs are mapped literally into the container.
Not that it is advisable to run as root, but depending on your requirements, the official Elasticsearch image (elasticsearch:7.7.0) runs as root and does not have this problem.
Another solution that would work for the bitnami image is to make the ./data directory owned by group root and group writable, since it appears the group of the Elasticsearch process is still root.
A third solution is to change the GID in the bitnami image to whatever group the data was created with and make the data group-writable.
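For the second option, a minimal host-side sketch (assuming, as noted above, that the Elasticsearch process in the Bitnami image runs with group root, i.e. GID 0):
mkdir -p ./data
sudo chgrp -R root ./data    # hand the directory to group root (GID 0)
sudo chmod -R g+rwX ./data   # group read/write, plus execute on directories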
I want to create Docker containers with volumes and a custom group, but I am running into a permission error inside the container. All the files have, for example, 'custom-group' and work fine, but the documents folder has the root group by default. I think this is due to the volumes. How do I set 'custom-group' on the documents folder? My code is below:
volumes:
  - /base/documents:/app/documents:rw
The uid/gid inside of the container is typically the same as the uid/gid outside of the container, on the host (user namespaces are off by default and wouldn't solve this problem, in fact they would make it worse). Note that this is uid/gid, and not user name and group name, since containers can have their own /etc/passwd and /etc/group files.
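A quick way to see this for yourself (alpine is just a convenient throwaway image here):
docker run --rm -u "$(id -u):$(id -g)" alpine id
# prints the same numeric uid/gid as on the host, but with no matching names,
# because alpine's /etc/passwd and /etc/group know nothing about them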
You need to run your container with the uid/gid matching that of the files on your host. That is done with the user section of a compose file, or -u option on the docker run command line, e.g.:
docker run -u "$(id -u):$(id -g)" -v /base/documents:/app/documents:rw ...
or in the compose file:
user: "1000:1000"
If your application must run as root, then there are a lot fewer options to handle this. My own preference is to dynamically handle the uid/gid from an entrypoint that starts up as root, fixes the uid/gid inside the container to match the host (looking at the volume owner), and then drops from root to the container user for running the application. For more details on how that's done, you can see my base image repo, including the nginx example and fix-perms script.
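As a rough sketch of that pattern (this is not the exact fix-perms script; the user name, volume path, and the use of gosu are assumptions):
#!/bin/sh
# Start as root, align the app user's uid/gid with the volume owner, then drop privileges.
# Assumes usermod/groupmod and gosu (or su-exec) are installed in the image.
set -e

APP_USER=app                 # hypothetical user baked into the image
VOLUME_DIR=/app/documents    # the bind-mounted directory from the compose file

if [ "$(id -u)" = "0" ]; then
  TARGET_UID=$(stat -c '%u' "$VOLUME_DIR")
  TARGET_GID=$(stat -c '%g' "$VOLUME_DIR")
  if [ "$TARGET_GID" != "$(id -g "$APP_USER")" ]; then
    groupmod -g "$TARGET_GID" "$APP_USER"
  fi
  if [ "$TARGET_UID" != "$(id -u "$APP_USER")" ]; then
    usermod -u "$TARGET_UID" "$APP_USER"
  fi
  # hand off to the app user for the real process
  exec gosu "$APP_USER" "$@"
fi

exec "$@"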
Use the root user in your docker-compose to get full permissions.
Example:
node-app:
  container_name: node-app
  image: node
  user: root
  volumes:
    - ./:/home/node/app
    - ./node_modules:/home/node/app/node_modules
    - ./.env.docker:/home/node/app/.env
NOTE: user: root gives you full permissions on your volumes.
I want to install the linode/lamp container to work on a WordPress project locally without messing up my machine with all the LAMP dependencies.
I followed this tutorial which worked great (it's actually super simple).
Now I'd like to use docker-compose because I find it more convenient to simply having to type docker-compose up and being good to go.
Here what I have done:
Dockerfile:
FROM linode/lamp
RUN service apache2 start
RUN service mysql start
docker-compose.yml:
web:
  build: .
  ports:
    - "80:80"
  volumes:
    - .:/var/www/example.com/public_html/
When I do docker-compose up, I get:
▶ docker-compose up
Recreating gitewordpress_web_1...
Attaching to gitewordpress_web_1
gitewordpress_web_1 exited with code 0
Gracefully stopping... (press Ctrl+C again to force)
I'm guessing I need a command argument in my docker-compose.yml but I have no idea what I should set.
Any idea what I am doing wrong?
You cannot start those two processes in the Dockerfile.
The Dockerfile determines what commands are to be run when building the image.
In fact many base images like the Debian ones are specifically designed to not allow starting any services during build.
What you can do is create a file called run.sh in the same folder that contains your Dockerfile.
Put this inside:
#!/usr/bin/env bash
service apache2 start
service mysql start
tail -f /dev/null
This script just starts both services and forces the console to stay open.
You need to put it inside your container though; this you do via two lines in the Dockerfile. Overall I'd use this Dockerfile:
FROM linode/lamp
COPY run.sh /run.sh
RUN chmod +x /run.sh
CMD ["/bin/bash", "-lc", "/run.sh"]
This ensures that the file is properly run when firing up the container, so that it stays running and those services actually get started.
What you should also look out for is that port 80 is actually available on your host machine. If you have anything bound to it already, this compose file will not work.
Should this be the case for you (or you're not sure), try changing the port line to something like 81:80 and try again.
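That change in the compose file would look like this (81 is just an arbitrary free host port):
web:
  build: .
  ports:
    - "81:80"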
I would like to point you to another resource where a LAMP server is already configured for you; you might find it handy for your local development environment.
You can find it mentioned below:
https://github.com/sprintcube/docker-compose-lamp