docker-compose docker-entrypoint-initdb.d Permission denied

I am trying to run the Puppet pupperware suite (all three servers: puppet server, puppet DB, and DB server).
I am using the official YAML file provided by puppetlabs for docker-compose: https://github.com/puppetlabs/pupperware/blob/master/docker-compose.yml
When I run that YAML file with docker-compose, however, I run into the following error (from docker-compose logs):
postgres_1 | ls: cannot open directory '/docker-entrypoint-initdb.d/': Permission denied
As a result, the stack fails to come up (only the puppet server starts, but not the others).
My docker host is a Fedora 33 virtual machine running inside a Proxmox environment. Proxmox runs on the physical host.
I have disabled SELinux, and I am running docker (moby) rootless. My local user (uid 1000) can run docker without sudo.
I believe I need to set permissions in the container (probably via a Dockerfile), but I am not sure how to change them, nor how to use a Dockerfile and docker-compose together.
Thank you for your help.

The docker-compose file is from the Puppet 6 era. The Docker images that the Pupperware setup currently pulls are latest, which is Puppet 7.
I got my pre-existing setup functioning again by changing the image names to:
puppet/puppetserver:6.14.1
postgres:9.6
puppet/puppetdb:6.13.1
Maybe this works for you as well.
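
For reference, a minimal sketch of how the pinned tags might look in the services section of docker-compose.yml (the service names here are assumptions based on the pupperware repo; check your own file):

# Sketch only: pin explicit tags instead of relying on the implicit :latest.
services:
  puppet:
    image: puppet/puppetserver:6.14.1
  postgres:
    image: postgres:9.6
  puppetdb:
    image: puppet/puppetdb:6.13.1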

Well, since it's been a month and you have no answers, I will try to help you with what I know.
You should put a Dockerfile in the root of your project. It contains the instructions the Docker daemon uses to build your image, including the commands that run inside the container at build time. docker-compose can then build from that Dockerfile instead of pulling a prebuilt image.
So to solve the permission problem, you could add a RUN instruction, which executes a Linux command at build time, to fix the ownership or permissions of that folder.
Also look at this answer.
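
To use a Dockerfile together with docker-compose, you can point a service at a build context instead of a prebuilt image. A minimal sketch, assuming the postgres service from the pupperware compose file; note that if /docker-entrypoint-initdb.d is bind-mounted from the host, permissions come from the host path, so under rootless Docker you may also need to fix the host directory's ownership.

Dockerfile, next to docker-compose.yml:

# Sketch: extend the official image and fix ownership of the init directory.
FROM postgres:9.6
RUN mkdir -p /docker-entrypoint-initdb.d \
    && chown -R postgres:postgres /docker-entrypoint-initdb.d

Then, in docker-compose.yml, replace the image line of the service with a build section:

services:
  postgres:
    build:
      context: .
      dockerfile: Dockerfile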

Related

How to clean up Sentry correctly

My Sentry version is 22.9.0.
It was downloaded from https://github.com/getsentry/self-hosted.git and built with docker compose.
I want to clean up historical data to save space.
I checked several methods on the Internet. All of them go through docker exec -it sentry_worker_1 bash or docker exec -it sentry_postgres_1 bash, but these methods are outdated; I did not find such containers among my Docker containers.
Later, after looking at the configuration file, I tried changing SENTRY_EVENT_RETENTION_DAYS in the docker-compose.yml in the root directory to 7.
[screenshots: the docker-compose.yml location and the modified SENTRY_EVENT_RETENTION_DAYS value]
After restarting (docker compose down && docker compose up -d), about 50 GB was cleaned up. But when I checked the Sentry web UI, everything had been cleared, which is obviously wrong.
My questions:
How do I clean up properly?
After modifying SENTRY_EVENT_RETENTION_DAYS and restarting, why does it still take up so much space?
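
For what it's worth, a hedged sketch of a manual cleanup, assuming the service names (worker, postgres) from the getsentry/self-hosted compose file and that the sentry CLI is available in the worker image:

# Run Sentry's built-in cleanup for data older than 7 days.
docker compose exec worker sentry cleanup --days 7

# Postgres frees disk space lazily; a vacuum may be needed to return it to the OS.
docker compose exec postgres vacuumdb -U postgres --all --full

As far as I know, SENTRY_EVENT_RETENTION_DAYS is applied by the scheduled cleanup job, so shortening it only trims events going forward, and the Postgres files on disk shrink only after a vacuum.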

Docker bind mount is empty inside a containerized TeamCity build agent with DOCKER_IN_DOCKER enabled

I'm running containerized build agents using the linux-sudo image tag and, using a Dockerfile (here), I have successfully customized it to suit our needs. This is running very well and successfully running the builds I need it to.
I am running it in a Swarm cluster with a docker-compose file (here), and I have recently enabled the DOCKER_IN_DOCKER variable. I can successfully run docker run hello-world in this container and I'm happy with the results. However I am having an issue running a small utility container inside the agent with a bind mount volume.
I want to use this Dockerfile inside the build agent to run npm CLI commands against the files in a mounted directory. I'm using the following command to run the container with a custom command and a volume as a bind mount.
docker run -it -v $(pwd):/app {IMAGE_TAG} install
So in theory, running npm install against the local directory that is mounted in the container (npm is the command in the ENTRYPOINT so I can just pass install to the container for simplicity). I can run this on other environments (Ubuntu and WSL) and it works very well. However when I run it in the linux-sudo build agent image it doesn't seem to mount the directory properly. If I inspect the directory in the running utility container (the npm one), the /app folder is empty. Shouldn't I expect to see the contents of the bind mount here as I do in other environments?
I have inspected the container, and it confirms there is a volume created of type bind and I can also see it when I list out the docker volumes.
Is there something fundamental I am doing wrong here? I would really appreciate some help, please.
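
One hedged diagnostic, in case it helps: if the agent talks to a Docker daemon that does not share the agent's filesystem (for example through a mounted Docker socket), the left-hand side of -v is resolved on the daemon's filesystem, and Docker silently creates an empty directory when that path does not exist there. A quick probe:

# Inside the build agent: this path exists in the agent container...
echo "agent pwd: $(pwd)"

# ...but the daemon resolves it on ITS own filesystem, so this may list an
# empty, freshly created directory rather than your checkout.
docker run --rm -v "$(pwd)":/app alpine ls -la /app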

Cannot remove file after running Docker

I have WSL and Docker for Windows installed, and I typically use Docker as follows:
docker run -v "$(pwd -P):/srv" -w/srv image make
When I run things such as npm init or sphinx-build in my container, I sometimes end up with files that I cannot remove once the container has terminated.
In the current case I was running Python Sphinx, and it created a file _build/doctrees/environment.pickle that I cannot remove unless I restart my computer. When I look at the permissions I see:
You do not have permission to view or edit this object's permission settings
However, I am running this panel in administrator mode.
So, two questions:
Why does Docker sometimes create such files?
How can I prevent Docker from doing this?
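
One likely cause, offered as an assumption: processes in the container run as root by default, so files they create in the bind mount end up root-owned, which the Windows side then refuses to let you modify. A minimal sketch of running the container as your own user instead:

# Run make as the calling user so any files it creates are owned by you.
docker run --user "$(id -u):$(id -g)" -v "$(pwd -P):/srv" -w /srv image make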

How to configure a proxy for Docker containers?

First of all, I tried the approach of setting '/etc/systemd/system/docker.service.d/http-proxy.conf' (https://docs.docker.com/config/daemon/systemd/#httphttps-proxy). It works for the Docker daemon, but not for containers; it seems to only take effect for commands like 'docker pull'.
Secondly, I have a lot of Docker containers, and I don't want to run 'docker run -e http_proxy=xxx...' every time I start one.
So I guessed there might be a way to automatically load a global configuration file when a container starts. I googled it and found the suggestion to set '~/.docker/config.json' (How to configure docker container proxy?), but this still does not work for me.
(My host machine runs CentOS 7; here is my docker -v: Docker version 1.13.1, build 6e3bb8e/1.13.1)
I suspect it may be related to my Docker version, or to Docker being started by the systemd service, which would explain why ~/.docker/config.json does not take effect.
Finally, I just hope that modifying a configuration file will make all my containers configure the environment variables automatically when they start (that is, set 'http_proxy=http://HostIP:8118 https_proxy=http://HostIP:8118' when a container starts, like the Dockerfile ENV parameter). I want to know if there is such a way. If it can be done, the containers can use the host's proxy, since the host's proxy is working properly.
But I was wrong: I tried running a container and setting http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118, but when I ran 'wget facebook.com' I got 'Connecting to HostIP:8118... failed: No route to host.' The host machine (CentOS 7) can run the wget successfully, and I can ping the host from inside the container. I don't know why; it might be related to firewalls and the 8118 port.
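
One possible explanation, offered as an assumption: on CentOS 7, 'No route to host' from a container to a host port is often firewalld rejecting traffic from the docker0 bridge. Two common workarounds, assuming firewalld is running:

# Open the proxy port to containers...
firewall-cmd --permanent --add-port=8118/tcp
# ...or trust all traffic arriving on the Docker bridge interface.
firewall-cmd --permanent --zone=trusted --add-interface=docker0
firewall-cmd --reload

Also check that the proxy on the host listens on HostIP or 0.0.0.0, not only on 127.0.0.1.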
That is everything. I have no other ideas; can anyone help me?
PS: As you can see from the screenshots below, I actually want to install goa and goagen but get an error, probably for network reasons, so I wanted to enable the proxy to try again; hence the problem above.
[screenshot 1: wget failing inside my Go Docker container]
[screenshot 2: the same wget succeeding on my host]
You need version 17.07 or newer to automatically pass the proxy to containers you start via the config.json file. The 1.13 releases are long out of support.
This is well documented by Docker:
https://docs.docker.com/network/proxy/
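
For reference, the proxies section documented on that page goes in ~/.docker/config.json on the client machine; the address below is a placeholder. With 17.07+, new containers then get http_proxy/https_proxy set automatically:

{
  "proxies": {
    "default": {
      "httpProxy": "http://HostIP:8118",
      "httpsProxy": "http://HostIP:8118",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}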

Volume mount not writing out to host machine

I'm trying to set up my Docker container with a host-machine directory mounted into it for writing logs to. My Dockerfile is pretty simple:
FROM golang:1.9.2
WORKDIR /
COPY daemon /
CMD ["./daemon", "-d", "debug"]
Where daemon is the compiled binary of a Go program. I'm starting the container via the following command:
docker run -d --restart unless-stopped -v /var/log/scope/:/var/log/scope/ slack_daemon
Where slack_daemon is the name of the image the above dockerfile is built into.
However, writes to the directory /var/log/scope in the container are not showing up on the host machine. Anything I place in that directory on the host is visible inside the container, but neither writing to existing files nor creating new files in that directory from inside the container shows up on the host. No error is thrown by Docker or by any of the commands; it simply isn't writing anything out to the host directory.
Running Docker 17.09.0-ce-mac35, stable channel, on OSX 10.12.6.
Things I've tried:
Running the docker run command as sudo.
Adding the --privileged flag to docker run.
chmod 777 on /var/log/scope on the host machine.
I specifically need that directory in both host and container, as our code and log handler both expect that location and it would be too large of a change for this current issue to alter that.
Any guidance on this would be appreciated. Most of the sources I can find online suggest that this is an SELinux issue, but I'm not getting a "permission denied" error, and this is OSX, not Linux.
I've also checked on the /var dir itself. It aliases to /private/var:
lrwxr-xr-x# 1 root wheel 11 Oct 30 10:54:11 2017 var -> private/var
And /private is by default a shared directory in Docker's file sharing settings.
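
One way to narrow this down, as a rough sketch: write through the mount from a throwaway container, then check both the symlinked and the real path on the macOS side, since the file sharing in Docker for Mac matches on the resolved path:

# Probe the bind mount from a minimal container.
docker run --rm -v /var/log/scope:/var/log/scope alpine \
  sh -c 'echo probe > /var/log/scope/probe.txt && ls -l /var/log/scope'

# On the macOS host, check both paths:
ls -l /var/log/scope /private/var/log/scope

If the probe file appears under /private/var/log/scope but not at the path you expected, using -v /private/var/log/scope:/var/log/scope in the run command may behave differently.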
