I have WSL and Docker for Windows installed, and I typically use Docker as follows:
docker run -v "$(pwd -P):/srv" -w /srv image make
When I run things such as npm init or sphinx-build in my container, I sometimes end up with files that I cannot remove once the container has terminated.
In the current case I was running Python Sphinx, and it created a file _build/doctrees/environment.pickle that I cannot remove unless I restart my computer. When I look at the permissions, I see:
You do not have permission to view or edit this object's permission settings
However, I am running this panel in administrator mode.
So two questions:
Why does Docker sometimes create such corrupted files?
How can I prevent Docker from doing this?
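For what it's worth, files like this are typically created as root (UID 0) by the process inside the container, which the Windows side then has trouble reconciling. A common mitigation, sketched below under the assumption that the image can run as an arbitrary non-root user, is to run the container as your own UID/GID so anything it writes into the bind mount is owned by you:

# run the container as the calling user so files created under /srv
# are owned by you rather than by root (assumes the image tolerates
# running as a non-root UID)
docker run --user "$(id -u):$(id -g)" -v "$(pwd -P):/srv" -w /srv image make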
Does anyone know if Docker somehow caches files/file systems? And if it does, is there a way to suppress that, or to force it to regenerate/refresh the cache? I am editing my files outside my Docker container, but the directory inside the container doesn't seem to include them. Outside the container, however, the "same" directory does include the files. The only thing that makes sense to me is that Docker has an internal "copy" of the directory and isn't going to the disk, so it sees an outdated copy of the directory from before the file was added.
Details:
I keep my "work" files in a directory on a separate partition (/Volumes/emacs) on my MacBook, i.e.:
/Volumes/emacs/datapelago
I do my editing in Emacs on the MacBook, but not in the Docker container. I have a link to that directory called:
/projects
And I might edit or create a file called:
/projects/nuc-440/qflow/ToAttribute.java
In the meantime, I have a Docker container that I created this way:
docker container create -p 8080:8080 -v /Volumes/emacs/datapelago:/projects -v /Users/christopherfclark:/Users/cfclark --name nuc-440 gradle:7.4.2-jdk17 tail -f /dev/null
I keep a shell running in that container:
docker container exec -it nuc-440 /bin/bash
cd /projects/nuc-440
And after making changes I run the build/test sequence:
gradle clean; gradle build; gradle test
However, I have recently noticed that when I make changes or add files, they don't always get reflected inside the Docker container, which of course can cause the build to fail or tests not to pass.
Thus, this question.
I'd rather not have to start/stop the container each time; I'd prefer to keep it running and tell it to "refetch" the /projects/nuc-440 directory and its children.
I restarted the Docker container and it is now "tracking" file changes again. That is, I can make changes in macOS and they are reflected inside Docker with no additional steps. I don't seem to have to continually restart it. It must have gotten into a "weird" state. However, I don't have any useful details beyond that. Sorry.
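If it happens again, a quick way to check whether the bind mount itself has stopped propagating (as opposed to, say, Gradle caching something) is to create a marker file on the host and look for it from inside the container; the paths below are just the ones from this question:

# on the macOS host
touch /Volumes/emacs/datapelago/nuc-440/.sync-check
# inside the container; if the file is missing, the mount has gone stale
docker container exec nuc-440 ls -la /projects/nuc-440/.sync-check
# restarting the container re-establishes the mount, per the update above
docker container restart nuc-440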
I am trying to run the Puppet Pupperware suite (all three servers: Puppet Server, PuppetDB, and the database server).
I am using the official YAML file provided by Puppetlabs for Docker Compose: https://github.com/puppetlabs/pupperware/blob/master/docker-compose.yml
When I run that YAML file with Docker Compose, however, I run into the following error (from docker-compose logs):
postgres_1 | ls: cannot open directory '/docker-entrypoint-initdb.d/': Permission denied
As a result, the startup fails (only the Puppet server comes up, not the other two).
My docker host is a Fedora 33 virtual machine running inside a Proxmox environment. Proxmox runs on the physical host.
I have disabled SELinux, and I am running docker (moby) rootless. My local user (uid 1000) can run docker without sudo.
I believe I need to set permissions in the container (probably via a Dockerfile), but I am not sure how to change that, nor how to use a Dockerfile and docker-compose simultaneously.
Thank you for your help.
The docker-compose file is from the Puppet 6 era. The Docker images that the Pupperware setup currently pulls are latest, which is Puppet 7.
I got my pre-existing setup functioning again by changing the image names to:
puppet/puppetserver:6.14.1
postgres:9.6
puppet/puppetdb:6.13.1
Maybe this works for you as well.
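For reference, this is roughly where those pins would go in docker-compose.yml; the service names here (puppet, puppetdb, postgres) are assumptions based on the linked pupperware file:

services:
  puppet:
    image: puppet/puppetserver:6.14.1   # pinned instead of latest
  puppetdb:
    image: puppet/puppetdb:6.13.1
  postgres:
    image: postgres:9.6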
Well, since it's been a month and you have no answers, I will try to help you with what I know.
You should put a Dockerfile in the root of your project. It contains the instructions the Docker daemon uses to build your image, including commands that are executed by the Linux environment inside the container. docker-compose then reads your docker-compose.yml and builds and runs containers from those instructions.
So, to solve the permission problem, you should add a RUN instruction, which executes a Linux command through the shell at build time, and use it to fix the permissions on that folder.
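As a sketch of that idea (the path comes from the error message above; treat the rest as an assumption, not a tested fix): a Dockerfile that loosens the permissions on the init-scripts directory, plus a build: stanza so docker-compose builds it instead of pulling the stock image:

# Dockerfile (hypothetical sketch)
FROM postgres:9.6
# make the init-scripts directory readable by the postgres user
RUN chmod 0755 /docker-entrypoint-initdb.d

# docker-compose.yml (relevant fragment): build from that Dockerfile
services:
  postgres:
    build: .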
Also look at this answer
I've been attempting to move an application to the cloud for a while now and have most of the services set up in pods running in a k8s cluster. The last piece has been giving me trouble: I need to set up an image with an older piece of software that cannot be installed silently. I then attempted, in my Dockerfile, to install its .NET dependencies (2005.x86, 2010.x86, 2012.x86, 2015.x86, 2015.x64) and to manually transfer a local install of the program, but that did not work either.
Is there any way to run through a guided install in a remote Windows image, or to determine all of the file changes made by an installer in order to apply them manually?
You can track the changes done by the installer following these steps:
start a new container based on your base image
docker run --name test -d <base_image>
open a shell in the new container (I am not familiar with Windows, so you might have to adapt the command below)
docker exec -ti test cmd
Run whatever commands you need to run inside the container. When you are done, exit from the container.
Examine the changes to the container's filesystem:
docker container diff test
You can also use docker container export to export the container's filesystem as a tar archive, and then docker image import to create an image from that archive.
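Putting those last two commands together, a minimal sketch (the archive and image names are made up):

# snapshot the configured container's filesystem as a tar archive
docker container export test -o configured.tar
# turn that snapshot into a new image
docker image import configured.tar myapp/configured:latest

Note that docker image import produces a bare filesystem image without the original image's CMD/ENTRYPOINT metadata, so you may need to specify the command to run when starting containers from it.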
I start a Docker container with a special bash script that runs the container and then creates a user X with a dynamic name, UID and GID in the container. I can then bash into the container and perform actions as this user X. The script also creates an 'alias' user named vscode with the same UID as the earlier created dynamic user X.
In VSCode I can attach to this container. Two questions:
How can I set up VSCode to perform all actions as the 'vscode' user or as user X? (When using devcontainer.json to create the container this is trivial, but here I attach to an existing container and devcontainer.json is not used.)
In devcontainer.json you have the option to automatically install extensions. Which settings file do I need to create to automatically install extensions when attaching to a container?
The solution should be automated. E.g., manual intervention and committing the image, as suggested below, is possible but will make it much harder for users to just use my Docker image.
I updated to VSCode 1.39 and tried to add:
ADD server-env-setup /root/.vscode-server/server-env-setup
But "server-env-setup" seems to be only used for WSL.
I'll answer your questions in reverse order:
VSCode installs extensions after creating the container by using the docker exec command.
And now the recipe: the easiest way is to take a container already created by VSCode:
Run "Open folder on container" for creating dev container.
Once the container is ready and you can work with VSCode, stop your environment by clicking "Close remote connection".
Run docker ps -a. You should see recently exited containers, something like:
As you can see, the latest container is a7aa5af7ec08 vsc-typescript-2ea9f347739c5397afc431028000c02b. This is your container, with all extensions installed. And it doesn't matter whether you installed the extensions manually or configured them via devcontainer.json.
Run docker commit a7aa5af7ec08 all-installed-vscode-image:latest. Now you have a Docker image with all your beloved software installed. You can upload this image to your favorite Docker registry and use it on other machines as well.
Now you can run docker run -i -u vscode all-installed-vscode-image:latest and attach VSCode to this container. This is the answer to your first question.
Also, you can review the VSCode documentation and use devcontainer.json configurations when you attach to already running containers, and even to containers running on remote machines.
VSCode now implements a "remoteUser" property which you can set in the image configuration. This will ensure that VSCode logs into the container as the correct user.
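As a sketch, in that configuration file (a JSONC file VSCode keeps per image name) the property would look something like this; the extension ID is only an example:

{
  // log in to the container as this user instead of root
  "remoteUser": "vscode",
  // extensions to install automatically when attaching (example ID)
  "extensions": ["ms-python.python"]
}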
I am running a Docker container on CoreOS (the host) and have mounted a host folder onto a folder in the container.
docker run -v /home/core/folder_name:/folder_name <container_name>
Now, each time I change (insert/delete) some file in that host folder (folder_name), I have to restart the container (container_name) to see the effects.
docker restart <container_name>
Is there any way, from the host side or the Docker side, to restart the container automatically when there is a change (insert/delete) in the folder?
Restarting the Docker container on a folder change is rather antithetical to the whole notion of the -v option in the first place. If you really, really need to restart the container in the manner you are suggesting, then the only way to do it is from the Docker host. There are a couple of tools (that I can name off the top of my head; there are definitely more) you could use to monitor the host folder and trigger the docker restart <container_name> command when a file is inserted or deleted: incron and inotify-tools. Here is another question someone asked similar to yours, and the answer recommended using one of the tools I suggested.
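For example, a minimal watch loop with inotify-tools might look like this (the path and container name are the ones from the question):

# restart the container whenever a file is created or deleted in the folder
while inotifywait -e create -e delete /home/core/folder_name; do
    docker restart <container_name>
done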
That said, there is no way the files in the host folder are not being changed in the Docker container as well. It must be that the program you are using in the Docker container isn't updating its view of the /folder_name folder after it starts up. Is it possible for you to force the program you are running in the Docker container to refresh or update? The -v option works via bind mounting and has been a stable feature in Docker for quite a while. With bind mounting, the /home/core/folder_name folder IS (for all practical purposes) the same folder as /folder_name in the container.
To see this for yourself, run the command:
docker run -t -i -v /home/core/folder_name:/folder_name <container_name> /bin/sh
This command gives you an interactive shell within the container. In this shell issue the command:
cd /folder_name; touch a_file
Now go to /home/core/folder_name on the Docker host, in a shell or a file browser. The file a_file will be there. You can delete that file on the host, go back to the shell running in the Docker container, and run ls /folder_name. The file a_file will no longer be there.
So, you either need to use inotify-tools or incron to restart your container any time a file changes on the host, or figure out how to get the program you are running in the Docker container to update its view of the /folder_name folder.