We're running DBT in Airflow on a GCP Compute Engine using Docker and docker-compose. In the docker-compose.yml file for our Airflow deployment, the DBT repo is attached as a volume:
webserver:
  ...
  volumes:
    - ../our-dbt-repo:/usr/local/airflow/dbt
Running dbt run usually generates a /logs directory with DBT logs. However, running dbt run from the Docker container on the GCP machine throws the error [Errno 13] Permission denied: 'logs'.
To test permissions, I ran docker exec -it <DockerContainerID> bash from the command line of the GCP machine (while the container was running) to get into the running container, then ran cd /usr/local/airflow/dbt/ && touch file.txt and received the error: touch: cannot touch 'file.txt': Permission denied. So it seems clear that no files can be added to the /dbt folder that was mounted as a volume into the container, which is why the logs cannot be written.
Is there a way to give our /dbt volume permission to write logs? Alternatively, could we write the DBT logs somewhere else, so that no writes are required in the /dbt volume at all?
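One hedged possibility, assuming the container runs dbt as a fixed non-root UID (50000 below is only an illustration; check the real one with docker exec <DockerContainerID> id -u): make the host-side repo directory writable by that UID.

```shell
# On the GCP host: give the mounted repo to the UID that runs dbt inside
# the container. 50000 is an assumption -- verify it first with:
#   docker exec <DockerContainerID> id -u
sudo chown -R 50000 ../our-dbt-repo
sudo chmod -R u+rwX ../our-dbt-repo
```

For the second idea, dbt_project.yml accepts a log-path setting (e.g. log-path: /tmp/dbt-logs), which would keep log writes out of the /dbt volume entirely.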
I run a QNAP TS-453a at home with Container Station on it. Suddenly multiple containers started failing with "Permission Denied" errors.
For example, postgres/postgres returns:
error: exec failed: permission denied
The nodered/node-red container returns: exec ./entrypoint.sh: permission denied. This continues in different forms for a total of 20 containers; basically every container returns a permission error on its docker-entrypoint. I shared my docker.sock with 3 containers to maintain it: HomeAssistant, WatchTower and Portainer.
What I tried:
Recreate the container from scratch
Checked the permissions on the shared volumes
Reinstalled Container Station/Docker
Restart the QNAP NAS
sudo chmod 666 /var/run/docker.sock
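Since every failure points at the entrypoint, one thing worth checking is whether the entrypoint scripts have lost their execute bit (or sit on a volume mounted noexec). A quick local reproduction of the symptom, plus a hedged diagnostic sketch:

```shell
# A script without its execute bit fails with "Permission denied",
# exactly like the failing entrypoints above.
tmp=$(mktemp)
printf '#!/bin/sh\necho ok\n' > "$tmp"
chmod -x "$tmp"
"$tmp" 2>&1 || true     # -> Permission denied
chmod +x "$tmp"
"$tmp"                  # -> ok
rm -f "$tmp"
```

So for each failing container, inspecting the entrypoint's mode without running it may narrow things down, e.g. docker run --rm --entrypoint ls nodered/node-red -l ./entrypoint.sh (the path is an assumption; adjust per image), and checking mount | grep noexec on the NAS for the shared volumes.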
I want to run a Postman collection from a Docker image in a Gitlab CI pipeline. The Docker socket file is already mounted for the gitlab-ci-runner so the runner has access to the docker server.
Here's the job definition from .gitlab-ci.yml
postman:
  image: docker:20.10.14
  stage: schedule
  only:
    - schedules
  before_script: []
  script:
    - env
    - docker run -t -v $CI_PROJECT_DIR:/etc/newman postman/newman run postman_collection.json
The console output of the gitlab CI runner looks like this:
$ docker run -t -v $CI_PROJECT_DIR:/etc/newman postman/newman run postman_collection.json
error: collection could not be loaded
unable to read data from file "postman_collection.json"
ENOENT: no such file or directory, open 'postman_collection.json'
The file exists. I even tried
docker run --rm -it -v $PWD:/etc/newman --entrypoint sh postman/newman
from my localhost console and ran it manually. It's all there. What am I missing?
The Docker socket file is already mounted for the gitlab-ci-runner
The problem here is that when you are talking to the host docker daemon (i.e., when the host docker socket is mounted in the job), the /source/path part of a volume argument like -v /source/path:/container/path refers to the host filesystem, not the filesystem of the job container.
Think about it like this: the host docker daemon doesn't know that its socket is mounted inside the job container. So when you run docker commands this way, it's as if you're running the docker command on the runner host!
Because $PWD and $CI_PROJECT_DIR in your job command evaluate to paths in the job container (and those paths don't exist on the runner host filesystem), the volume mount will not work as expected.
This limitation is noted in the limitations of Docker socket binding documentation:
Sharing files and directories from the source repository into containers may not work as expected. Volume mounting is done in the context of the host machine, not the build container
The easiest workaround here would likely be to use postman/newman as your image: instead of docker.
myjob:
  image:
    name: postman/newman
    entrypoint: [""]
  script:
    - newman run postman_collection.json
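If the job must keep calling docker run against the host daemon, another workaround is to copy the file into a container with docker cp, which streams it over the Docker socket from the job's own filesystem instead of relying on a host path. A sketch (the container name newman is arbitrary):

```shell
# Create the container without starting it, copy the collection from the
# job filesystem into the image's working directory (/etc/newman for
# postman/newman), then run it attached and clean up.
docker create --name newman postman/newman run postman_collection.json
docker cp "$CI_PROJECT_DIR/postman_collection.json" newman:/etc/newman/postman_collection.json
docker start -a newman
docker rm newman
```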
I have a master container instance (Node.js) that runs some tasks in a temporary worker docker container.
The base image used is node:8-alpine and the entrypoint command executes with user node (non-root user).
I tried running my container with the following command:
docker run \
-v /tmp/box:/tmp/box \
-v /var/run/docker.sock:/var/run/docker.sock \
ifaisalalam/ide-taskmaster
But when the Node.js app tries to run a docker container, a permission denied error is thrown - the app can't read the /var/run/docker.sock file.
Accessing this container through sh and running ls -lha /var/run/docker.sock, I see that the file is owned by root:412. That's why my node user can't run docker containers.
The /var/run/docker.sock file on the host machine is owned by root:docker, so I guess the 412 inside the container is the docker group ID of the host machine.
I'd be glad if someone could provide a workaround to run Docker from a Docker container on Container-Optimized OS on GCE.
The source Git repository link of the image I'm trying to run is - https://github.com/ifaisalalam/ide-taskmaster
Adding the following command into my start-up script of the host machine solves the problem:
sudo chmod 666 /var/run/docker.sock
I am just not sure if this would be a secure workaround for an app running in production.
EDIT:
This answer suggests another approach that might also work - https://stackoverflow.com/a/47272481/11826776
Also, you may read this article - https://denibertovic.com/posts/handling-permissions-with-docker-volumes/
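As a less permissive alternative to chmod 666, the container's user can be attached to a supplementary group matching the socket's GID via the standard --group-add flag, so the socket stays root:docker-owned instead of world-writable. A sketch, assuming a Linux host (such as COS) where stat -c '%g' prints the group ID:

```shell
# Run the same container, but give its user a supplementary group equal
# to the GID owning the docker socket on the host (412 in the example
# above), so the non-root node user can read/write the socket.
docker run \
  -v /tmp/box:/tmp/box \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --group-add "$(stat -c '%g' /var/run/docker.sock)" \
  ifaisalalam/ide-taskmaster
```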
After creating a docker container as follows:
PS> docker run -d -p 1433:1433 --name sql1 -v sql1data:C:/sqldata -e sa_password=MyPass123 -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer
I stopped my container and copied a backup file into my volume:
PS> docker cp .\DataBase.bak sql1:C:\data
After that I can no longer start my container, the error message is as follows:
Error response from daemon: container 5fe22f4ac151d7fc42541b9ad2142206c67b43579ec6814209287dbd786287dc encountered an error during Start: failure in a Windows system call: Le système de calcul s’est fermé de façon inattendue. ("The compute system shut down unexpectedly.") (0xc0370106)
Error: failed to start containers: sql1
I can start and stop any other container, the problem occurs only after copying the file into the volume.
I'm using windows containers
my docker version is 18.06.0-ce-win72 (19098)
The only workaround I found is not to copy any files into my container volume.
This seems to be caused by file ownership and permissions. When you make a backup by copying the files and use those files for a new Docker container, the MySQL daemon in the container finds that the ownership and permissions of its files have changed.
I think the best thing to do is to create a plain MySQL Docker container and see which user owns your backup files in that container (I guess it must be 1000), then change the owner of your backup files to that user ID and create a container with volumes mapped to your backup files.
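A sketch of that check-then-chown approach, assuming the official mysql image and a host-side backup directory (the path is a placeholder):

```shell
# Ask a throwaway container which UID/GID the mysql user has in this image;
# overriding the entrypoint with `id` skips normal server startup.
docker run --rm --entrypoint id mysql:8.0 mysql
# Then give the backup files to that UID/GID. 999 is typical for the
# official image, but use whatever the command above actually printed.
sudo chown -R 999:999 /path/to/backup/files
```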
I am running a mongo container to dump a database. I cannot use sudo for the following docker command. How can I allow it to create that directory? Is there any workaround for this?
docker run --name mongocontainer65 mongo mongodump --host HOST --username ADMN --password PASS --authenticationDatabase admin --ssl --db cmsDev --out /tmp/jenkins_workspaces/full_deploy/
2018-05-01T11:57:29.285+0000 Failed: error dumping metadata: error creating directory for metadata file
/tmp/jenkins_workspaces/full_deploy/cmsDev: mkdir
/tmp/jenkins_workspaces/full_deploy/cmsDev: permission denied
/tmp/jenkins_workspaces/full_deploy/ is, I guess, a directory on your host machine. I think you have missed mounting the host volume into the container.
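A sketch of the corrected invocation, assuming /tmp/jenkins_workspaces/full_deploy exists on the host and is writable by your user, and that the dump is meant to land there:

```shell
# Mount the host directory into the container and point --out at the
# in-container path; without the -v mount, mongodump was trying to create
# the directory inside the container's own filesystem.
docker run --rm --name mongocontainer65 \
  -v /tmp/jenkins_workspaces/full_deploy:/dump \
  mongo mongodump --host HOST --username ADMN --password PASS \
  --authenticationDatabase admin --ssl --db cmsDev --out /dump
```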