Permission denied when spinning up a neo4j container via colima - docker

I recently switched from Docker Desktop to colima and I've been unable to start a neo4j container ever since. When I run docker-compose, I get the following errors in the docker logs, and neo4j crashes:
> docker logs neo4j
Changed password for user 'neo4j'.
chown: /data/dbms/auth.ini: Permission denied
chown: /data/dbms: Permission denied
chown: /data/dbms: Permission denied
chown: /data: Permission denied
chown: /data: Permission denied
Previously, the same code worked fine with the Docker Desktop set-up. Any ideas how I can fix this?
I have tried the following:
Verified that the signed-in user has read/write permissions on the files and directories mentioned in the logs above.
Tried reinstalling colima, docker and docker-compose.
Cross-checked permissions on the relevant folders for these tools (~/.colima, ~/.docker etc.).
Ran all commands with "sudo" wherever applicable.
Tried deleting the /data/ directory mentioned in the logs so it could be re-generated properly.
Turned it off and on again :P

I was able to find a solution and I'm writing it up here for future reference, for other users who might come across the same issue. The core of the issue lies with bind-mounted volumes: Docker Desktop ran with elevated privileges, but after the switch to colima those privileges were no longer there.
User permissions weren't being passed on properly to the containers, leaving them unable to access the bind-mounted volumes on the host machine. The solution is to add a user:group or uid:gid mapping in the docker run command, the docker-compose file, etc.:
user: "<uid>:<gid>"
In a docker-compose file, it would look like this:
version: '3.4'
services:
  neo4j:
    image: neo4j:3.5.5
    container_name: neo4j
    ports:
      - 7474:7474
      - 7687:7687
    volumes:
      - ./example/docker/neo4j/conf:/conf
      - ./.local/neo4j/data:/var/lib/neo4j/data
    user: '1000'
    group_add:
      - '1000'
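The same mapping works with plain docker run via the --user flag. A minimal sketch, reusing the image tag and data mount from the compose file above (adjust paths to your own setup):

docker run --rm \
  --user "$(id -u):$(id -g)" \
  -p 7474:7474 -p 7687:7687 \
  -v "$PWD/.local/neo4j/data:/var/lib/neo4j/data" \
  neo4j:3.5.5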
For further information, please go through the following docs/threads:
https://docs.docker.com/engine/reference/run/#user
https://docs.docker.com/storage/volumes/
https://github.com/abiosoft/colima/issues/54

Related

PostGIS Docker root Access

How to build postgis container as root user using docker-compose up?
In the Dockerfile, separate attempts to set USER to root as well as to 0 did not work.
Updating the docker-compose service with user: '0' was also tried, to no avail.
The error I am getting is Permission denied.
id -u always reports 999 during the build. This seems to be a system user with limited privileges.
I would prefer to just run docker-compose up with no flags and keep all configurations in docker-compose.yml and/or Dockerfile.
Dockerfile
FROM postgis/postgis:13-3.3
USER root
COPY ./startup.sh /docker-entrypoint-initdb.d/startup.sh
NOTE:
I realized that I should have added more context. I created another post that better describes the issue.
Open SSH tunnel during PostGIS Docker build
Please be careful with root: it can introduce vulnerabilities into your database. I highly recommend using it only for development.
Note that docker-compose up has no --user flag, but you can pass one to docker-compose run:
docker-compose run --user root postgis
Or put it in your docker-compose file:
services:
  postgis:
    user: root
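To double-check which user the service actually ends up running as, one quick probe (assuming the service name postgis from the snippet above; the postgres entrypoint execs any non-postgres command directly):

docker-compose run --rm postgis id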

Docker dpage/pgAdmin - permission denied /var/lib/pgadmin/storage

I have a dpage/pgadmin:latest Docker container running on Ubuntu 20.04.
Client: Docker Engine - Community
  Version: 20.10.8
Server: Docker Engine - Community
  Engine:
    Version: 20.10.8
docker-compose version 1.25.4, build 8d51620a
Now I want to manually save the backups from my database created with pgAdmin to another location.
The backups created with pgAdmin are stored in /var/lib/pgadmin/storage, so I added this path to the volumes in my docker-compose.yml:
volumes:
  - pgadmin_dat:/var/lib/pgadmin
  - /etc/localtime:/etc/localtime:ro
  - pgadmin_share:/var/lib/pgadmin/storage
When I now start the container I get this error:
werkzeug.exceptions.InternalServerError: 500 Internal Server Error: The user does not have permission to read and write to the specified storage directory.
This error occurs because the directory in my volume is owned by root:root and not 5050:5050 as mentioned here:
pgAdmin Container Deployment
When I change the permissions on my volume with sudo chown -R 5050:5050 <pgadmin_share> and sudo chmod -R u+rwx, the error disappears and pgAdmin is available.
When I create a backup I have to set the permissions again on the automatically created subdirectory (the username of the pgAdmin user).
Is there a problem with the nested volume, because I already mounted pgadmin_dat:/var/lib/pgadmin?
I'm curious how I can solve this better.
I would be happy about any suggestion.
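One possible avenue, sketched here rather than verified: fix the ownership of the named volume once with a throwaway container, using the 5050:5050 uid/gid from the pgAdmin docs. Note that compose usually prefixes volume names with the project name, so check docker volume ls for the actual name:

docker run --rm -v <project>_pgadmin_share:/fix alpine chown -R 5050:5050 /fix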

Need mkdir permission for EC2 docker container

I have a docker-compose which runs 3 containers:
selenium/hub
selenium/node-chrome
My own image of a java program that uses the two containers above to log into a website, navigate to a download page, click on a check-box, then click on a submit button, which causes a file to be downloaded.
Everything runs fine on my pc, but on an EC2 instance the chrome node gets the error:
mkdir: cannot create directory '/home/selsuser'
and then other errors trying to create sub-directories.
How can I give a container mkdir permissions?
I would like to run this as an ECS-Fargate task, so I would also need to give a container mkdir permissions within that task.
Thanks
Well, thank you for the details. It seems indeed that you need rights you do not have. What you can try is to create a user group and share it across your containers.
To do so,
Pick a GID that does not already exist (run id in your terminal to see the GIDs your user already has). We will assume 500 is not already used. Assign it as the group owner of the download directory:
chown :500 Downloads
Then give your new group read/write/traverse rights, and set the setgid bit so subfolders inherit the group:
chmod 775 Downloads && chmod g+s Downloads
(If you want to be at ease you can always give full permission, up to you.)
Then share the rights with a group created in the container via a Dockerfile (replace <username> and <group_name> with whatever you want):
FROM selenium/node-chrome:3.141.59
RUN addgroup --gid 500 <group_name> && \
    adduser --disabled-password --gecos "" --force-badname --ingroup <group_name> <username>
USER <username>
Then of course don't forget to edit your docker-compose file:
selenium:
  build:
    context: <path_to_your_dockerfile>
Hoping it will work :)
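To verify that the container now runs as the new user, a quick check (this bypasses the selenium entrypoint; the service name selenium matches the snippet above):

docker-compose run --rm --entrypoint id selenium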
(From the author of the question)
I do have volume mapping, but I do not think there is any connection between that and the problem I have. The problem is that the selenium/node-chrome container wants to create the directory. On my pc there are no problems; on EC2 it causes an error that it cannot create the directory. I assume on EC2 you need root privileges to do anything under /home.
Here is the complete docker-compose file:
version: "3"
services:
hub:
image: selenium/hub:3.141.59
ports:
- "4444:4444"
chrome:
image: selenium/node-chrome:3.141.59
shm_size: '1gb'
depends_on:
- hub
environment:
- HUB_HOST=hub
volumes:
- ./Downloads:/home/seluser/Downloads
migros-module:
image: freiburgbill/migros-docker
depends_on:
- chrome
environment:
- HUB_HOST=hub
- BROWSER=chrome
volumes:
- ./migros-output:/usr/share/udemy/test-output
- ./Downloads:/usr/share/udemy/Downloads
Thanks again to Paul Barrie for your input and help in getting me to look closer at the permissions.
For running the docker-compose file that worked on my pc but did not work on an EC2 instance, I created a /tmp/Downloads directory and gave it full rights (sudo chmod -R 2775 /tmp/Downloads), and then it ran without any problems!
For doing the same thing as an ECS-Fargate task, I created an EFS and attached it to an EC2 instance so I could go into it and set the permissions on the whole EFS (sudo chmod -R 777 /mnt/efs/fs1, where that is the default path connecting the EFS to the EC2 instance). I then created the ECS-Fargate task, attaching the EFS as a volume. Then everything worked!
So in summary, the host where the docker-compose is running has to have permissions for writing the file. With Fargate we cannot access the host, so an EFS has to be given permissions for writing the file.
I know there must be a better way of locking down the security to just what is needed, but the open permissions do work.
It would have been good if I could have changed the permissions of the Fargate temporary storage and used the bind mount, but I could not find a way to do that.
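For what it's worth, a narrower alternative to the blanket chmod 777 (a sketch, not verified on Fargate) is to look up the uid/gid the node-chrome container actually runs as, and grant write access on the mount point only to that user:

# print the uid/gid of the container user (seluser)
docker run --rm --entrypoint id selenium/node-chrome:3.141.59
# then, on the EFS side, use whatever ids the previous command printed
# (1200:1201 here is only an illustration):
sudo chown -R 1200:1201 /mnt/efs/fs1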

docker volume permissions in neo4j

I'm having a bit of bother with the following docker compose setup:
version: '2.4'
services:
  graph_dev:
    image: neo4j:3.5.14-enterprise
    container_name: amlgraph_dev
    user: "100:101"
    ports:
      - 7474:7474
      - 7473:7473
      - 7687:7687
    volumes:
      - neo4jbackup:/backup
      - neo4jdata:/data
volumes:
  neo4jbackup:
  neo4jdata:
I would like to run the neo4j-admin command, which must be run as user 100 (_apt). However, the volume I need to back up to, neo4jbackup, is mounted as root and _apt can't write there.
How do I create a volume that _apt can write to? The user _apt:neo4j obviously does not exist on the host, and there is no user on the docker image for which I have root.
I can think of two options:
1. Run the neo4j docker container as a valid Linux user and group and give that user access to a backup folder. Here is what my script looks like (I don't use compose currently) to run neo4j in docker under the current user:
docker run \
  --user "$(id -u):$(id -g)" \
  ...
Here is an article that covers doing the same thing with compose
https://medium.com/faun/set-current-host-user-for-docker-container-4e521cef9ffc
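The compose-side equivalent of that script looks roughly like this (a sketch; CURRENT_UID is a variable name of my own choosing, which compose substitutes from the shell environment):

# docker-compose.yml, inside the service definition:
#   user: "${CURRENT_UID}"
# then start the stack with the variable set:
CURRENT_UID="$(id -u):$(id -g)" docker-compose up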
2. (Hacky?) Run neo4j-admin outside docker, or in another container, in a process that does have access to the backup volume (I hear you want to run it as root?).
Of course, I'm wondering why the backup process or db backup would be owned by root (as opposed to a db owner or backup account). Personally, I feel it is best practice to avoid using the root account whenever possible.
I ended up solving this problem by running the command as _apt as required (docker-compose run graph_dev) and then using docker exec -it -u neo4j:neo4j graph_dev /bin/bash to copy the file over to the backup directory. Not elegant, but it works.
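Spelled out, those two steps look roughly like this (a sketch; the neo4j-admin arguments are illustrative for 3.5, check the backup docs for your setup):

# run the backup as the user from the compose file, writing into the /backup volume
docker-compose run graph_dev neo4j-admin backup --backup-dir=/backup --name=graph.db
# then shuffle files as neo4j inside the container
docker exec -it -u neo4j:neo4j graph_dev /bin/bash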

Is there a better way to avoid folder permission issues for docker containers launched from docker compose in Manjaro?

Is there a better way to avoid folder permission issues when a relative folder is set in a docker compose file on Manjaro?
For instance, take the bitnami/elasticsearch:7.7.0 image:
This image will always throw the ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/bitnami/elasticsearch/data/nodes]; error.
I can get around it by:
creating the data directory with sudo, followed by chmod 777
attaching a docker volume
But I am looking for an easier-to-manage solution, similar to the Docker experience on Ubuntu and OSX, where I do not have to first create a directory as root in order for the folder mapping to work.
I have made sure that my user is in the docker group by following the post-install instructions in the docker docs. I have no permission issues accessing docker info or the docker socket.
docker-compose.yml
version: '3.7'
services:
  elasticsearch:
    image: bitnami/elasticsearch:7.7.0
    container_name: elasticsearch
    ports:
      - 9200:9200
    networks:
      - proxy
    environment:
      - ELASTICSEARCH_HEAP_SIZE=512m
    volumes:
      - ./data/:/bitnami/elasticsearch/data
      - ./config/elasticsearch.yml:/opt/bitnami/elasticsearch/config/elasticsearch.yml
networks:
  proxy:
    external: true
I am hoping for a more seamless experience when using my compose files from git, which work fine on other systems, but I am running into this permission issue on the data folder on Manjaro.
I did check other posts on SO; some solutions are temporary, like disabling SELinux, while others require running docker with the --privileged flag, but I am trying to do this from compose.
This has nothing to do with the Linux distribution; it is a general problem with Docker and bind mounts. A bind mount is when you mount a directory of your host into a container. The problem is that the Docker daemon creates the directory as the user it runs as (root), and the UIDs/GIDs are mapped literally into the container.
Not that it is advisable to run as root, but depending on your requirements, the official Elasticsearch image (elasticsearch:7.7.0) runs as root and does not have this problem.
Another solution that would work for the bitnami image is to make the ./data directory owned by group root and group writable, since it appears the group of the Elasticsearch process is still root.
A third solution is to change the GID of the bitnami image to whatever group you had the data created with and make it group writable.
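A minimal sketch of the group-writable approach on the host, assuming the ./data path from the compose file above:

mkdir -p ./data
sudo chgrp root ./data
sudo chmod g+rwx ./data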
