VS Code dev container mounted directory is empty - docker

I have a devcontainer compose project that requires mongo and a replica server. This requires a few mongosh commands to be run, which I'd like to do in a separate container as a bash script.
My issue is that when I use "Clone Repository in Container Volume", the mounted directory is empty. This works fine when I first check the repo out locally and then build the container from that checkout.
Here is a demo repository that shows the issue: https://github.com/jrj2211/vscode-remote-try-node-mongo-compose
In this project, the compose file mounts the .devcontainer directory. The file I need is at the path: .devcontainer/scripts/mongosetup.sh.
volumes:
  - ./scripts:/scripts
This produces the correct result locally, but the folder is empty when the repository is cloned into a Docker volume.
What is the correct path to the folder location in the WSL2 volume? Is there a way to make this work both locally and cloned in a docker volume?
I tried to set an ENV variable from the devcontainer.json that pointed to ${workspaceFolder} but that ended up as an empty string in compose.
This documentation makes me believe it should work this way; the first page below is linked to from the second one, which covers "Clone Repository in Container Volume":
https://code.visualstudio.com/remote/advancedcontainers/add-local-file-mount
https://code.visualstudio.com/remote/advancedcontainers/improve-performance

I was able to get this working through the use of h4l's brilliant code. It takes containerWorkspaceFolder and localWorkspaceFolder and turns them into environment variables available in docker-compose. This has the added benefit of continuing to work both locally and in a container volume.
https://github.com/h4l/dev-container-docker-compose-volume-or-bind
Hopefully those variables will soon become available in container mode directly so that additional scripts aren't needed.
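For reference, here is roughly how such variables can then be consumed from docker-compose.yml (assumed to live in .devcontainer, as in the demo repo). The name LOCAL_WORKSPACE_FOLDER is only an illustrative assumption; the linked repository defines its own variable names and handles the switch between a bind mount and the container volume:
volumes:
  - ${LOCAL_WORKSPACE_FOLDER:-..}/.devcontainer/scripts:/scripts
The :-.. fallback keeps the relative path working for a plain local docker-compose up.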

Related

using docker-compose on a kubernetes instance with jenkins - mounting empty volumes

I have a Jenkins instance set up using Google's Jenkins on Kubernetes solution. I have not changed any of the settings of the Kubernetes Pod.
When I trigger a new job I am successfully able to get everything up and running until the point of my tests.
My tests use docker-compose. First I make sure to install docker (1.5-1+b1) and docker-compose (1.8.0-2) on the instance (I know I can optimize this by using an image that already includes these, but this is still just a proof of concept).
When I run the docker-compose up command everything works and the services start their initialization scripts. However, the mounts are empty. I have verified that the files exist on the Jenkins slave, and that the mount is created inside the docker service when I run docker-compose, but the mounted directories are empty.
Some information:
In order to get around file permissions I am using /tmp as the Jenkins Workspace. I am using SCM to pull my files (successfully) and in the docker-compose file I specify version: '2' and the mount paths with absolute paths. The volume section of the service that fails looks like this:
volumes:
  - /tmp/automation:/opt/automation
I changed the command that is run in the service to ls /opt/automation and the result is an empty directory.
What am I missing? I just want to mount a directory into my docker-compose service. This works perfectly from Windows, Ubuntu, and Centos devices. Why won't it work using the Kubernetes instance?
I found the reason it fails here:
A Docker container in a Docker container uses the parent HOST's Docker daemon, and hence any volumes that are mounted in the "docker-in-docker" case are still referenced from the HOST, and not from the container.
Therefore, the actual path mounted from the Jenkins container "does not exist" on the HOST. Because of this, a new, empty directory is created in the "docker-in-docker" container. The same thing applies when a directory is mounted into a new Docker container inside a container.
So it seems it is impossible to mount something from the outer Docker into the inner Docker, and another solution must be found.
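A minimal illustration of the effect described above, assuming the Jenkins pod talks to the node's Docker daemon (e.g. via a mounted /var/run/docker.sock); the alpine image is just a placeholder:
# Run from inside the Jenkins container, where /tmp/automation exists and contains files:
docker run --rm -v /tmp/automation:/opt/automation alpine ls /opt/automation
# -> prints nothing: /tmp/automation is resolved by the HOST's daemon, where it does not exist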

How can I configure go sdk and GOPATH from docker container?

I'm trying to configure a golang project with JetBrains Gogland and docker compose. I want to use GOPATH and Go from the docker container. I mean using the Go installation from the container for autocomplete etc. without installing Go on the local machine.
The project structure is:
project root
  docker-compose.yml
  back/
    Dockerfile
    main.go
    some other packages
  front/
    all the front files...
After that, I want to deploy my back folder to /go/src/app in the docker container. The problem is that when I develop the project I can't use autocomplete, as this project is not in my local GOPATH and there are different Go versions in the docker container and on my local machine.
I already read this question but I still can't solve my issue.
At the moment this is not possible, nor do I see how it could be possible in the future. Mounting a volume in docker means you "hide" the contents of that folder from the container and use the files on the host instead. As such, any time you mount the directory from your machine, the container's files at that location won't be visible. This means you can't have Go installed in the container and then mount a folder and use that location for the Go sources. If you are thinking: I'll just mount things in another place, do some symlink magic / copy files around, that's just a bad idea that leads to nowhere.
Gogland supports remote debugging as of EAP 10, released a few weeks ago. This allows you to debug applications running in containers or on remote hosts. As such, you can have Go and the source code on your machine but run the application in containers.

How to move a local volume onto a remote docker machine

I have my local docker machine and a remote docker machine, on the cloud. My docker-compose app has a webcontainer with this config:
web:
  container_name: web
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
    - ./data:/usr/src/app/data
  env_file: .env
  command: /usr/local/bin/gunicorn --workers 4 --timeout 120 --bind :8000 app:app
The important part is that second volume. I have this local folder called data with some 10GB of data in it. I made it a volume in the first place because otherwise building the container takes forever. Now that the app is production-ready, I'd like to deploy it. One problem: now my remote web container has an empty data folder mounted in it. So how do I move data from my local machine into a container on a remote docker machine? Where do I even move it to?
It seems like there are two tools for this:
docker cp which doesn't seem like it will work for remote docker machines
docker-machine scp which seems made for this, right?
I'm almost positive I need to use the second of these, but since I don't quite understand how docker machine works or where it keeps its data, I'm not sure what destination path to use:
$ dm scp -r /Users/alex/Documents/Project/data remote-machine:/usr/src/app/data
fails with error message:
scp: /usr/src/app/data: No such file or directory
Where should I be scp'ing this data in order to have it mount properly on my remote web container?
Local path vs. in-container path
Assuming you will use the same model remotely that you used locally, keep in mind that the path /usr/src/app/data is the path inside the container. When you are copying the files from one system to another, you just need to copy them from the current system to the remote system, then put them in a path where docker-compose knows how to find them, to mount into a new container.
So all you have to do is copy them from here to there, and use the same path relative to docker-compose.yml. It only knows your external volume as ./data, so if you put the directory in the same place (from docker-compose's perspective), everything should work the same.
How to copy the files
As for how to do the copy, these are just files, so it doesn't matter. scp -r should work, or make a zipfile, copy that, unzip into the correct place, etc. There are a ton of ways to copy files, so pick whatever is simplest for your case.
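For example, assuming the compose project lives in /home/user/project on the remote host (user, host and path are placeholders):
scp -r ./data user@remote-host:/home/user/project/data
After that, docker-compose up on the remote host will bind-mount the copied ./data exactly as it did locally.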
What exactly needs to be copied?
In the comments you expressed confusion about local vs. remote operations in docker-machine, and what else you needed to copy. Here's a fuller explanation:
On your local system (which I'm assuming is your own PC or laptop), you have docker-machine installed, and you've been using that for all of this development. Completely separate from that is your new cloud instance where you would like to deploy.
To run what you have locally already, up on your cloud instance, the cloud instance will need to have the following.
The docker-compose.yml file.
As long as you plan to use docker-compose to run this, that must be available.
Your .env file.
Since you are using an environment file in this setup, it must be available or docker-compose can't make use of it.
Your web image.
You have a build parameter for this container, but not an image parameter. So currently the only thing you can do is run docker-compose build web, which will locally generate an image that docker-compose then knows how to run.
Another option is to add an image parameter, with a repository:tag, such as myuser/myapp_web:1.0, and push that up to Docker Hub. Then, on your cloud instance, the image can be retrieved from Docker Hub instead of building it locally.
In that case, you can add an image parameter to the web container in docker-compose.yml, then build it and push it up.
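For example, the relevant part of docker-compose.yml could look like this (myuser/myapp_web:1.0 is just a placeholder tag):
web:
  build: ./web
  image: myuser/myapp_web:1.0
With the image parameter in place, the two commands are: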
docker-compose build web
docker-compose push web
Then on the cloud instance, you can fetch it:
docker-compose pull web
docker-compose will know to use that image because of the image parameter in docker-compose.yml (which is also present on the cloud server).
Ref: Creating a new repository on Docker Hub
Which of these options is preferable depends on how you want to manage things. Either one would work, but the "local build" option would require you copy any required source files to your cloud instance too (anything that is used during the build process).
I don't see in your question where the postgres container comes from. If you are also custom-building this one, then the same goes as for web. If you are using a public image for this, then you shouldn't need to copy anything; docker-compose will know how to fetch it, i.e. you can do this:
docker-compose pull postgres
What about docker cp and docker-machine scp?
You mentioned docker cp and docker-machine scp in your question.
As you already determined, docker cp is not a solution here. That command is for copying files between a container and the host filesystem. It has nothing to do with copying over a network.
As far as I know, docker-machine scp is for copying files between your local host and a docker-machine-managed VM. To copy files to your cloud instance, it is likely easier to use a more generic tool like scp or sftp.
Not sure as of which docker version, but contrary to the statements in the question and in Dan_Lowe's answer, this works fine:
docker cp ./data container:/usr/src/app/
docker cp is a normal part of the API, so it works like any other command, even remotely.
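For instance, assuming the remote daemon is reachable over SSH (supported since Docker 18.09; user, host and container name are placeholders):
DOCKER_HOST=ssh://user@remote-host docker cp ./data web:/usr/src/app/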

Dealing with data in Docker Containers with Gitlab-Ci

So I am using gitlab-ci to deploy my websites in docker containers. Because the gitlab-ci docker runner doesn't seem to do what I want, I am using the shell executor and letting it run docker-compose up -d. Here comes the problem.
I have 2 volumes in my docker container: ./:/var/www/html/ (the content of my git repo, i.e. the files I want to replace on build) and a mount that is "inside" of this mount, /srv/data:/var/www/html/software/permdata (a persistent mount on my server).
When the gitlab-ci runner starts, it tries to remove all files while the container is running, but because of this mount-in-mount it gets a "device busy" error and aborts. So I have to manually stop and remove the container before I can run my build (which kind of defeats the point of build automation).
Options I thought about to fix this problem:
stop and remove the container before gitlab-ci-multi-runner starts (seems not possible)
add the git data to my docker container and only mount my permdata (seems like you can't add data to a container without the volume option with docker compose like you can in a Dockerfile)
Option 2 would be ideal because then it would also sort out my issues with permissions on the files.
Maybe someone has gone through the same problem and could give me some advice.
seems like you can't add data to a container without the volume option with docker compose like you can in a Dockerfile
That's correct. The Compose file is not meant to replace the Dockerfile; it's meant to run multiple images for an application or project.
You can modify the Dockerfile to copy in the git files.
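A minimal sketch of that, assuming a PHP/Apache image (the base image is a placeholder; the paths follow the question):
FROM php:apache
COPY . /var/www/html/
With the repo contents baked into the image, only /srv/data:/var/www/html/software/permdata needs to remain as a volume in docker-compose.yml.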

Is there a way to replicate pwd in a volume mount for docker in a boot2docker context?

So currently I can do docker run -v .:/usr/src/app, or even specify it in my docker-compose.yml:
web:
  volumes:
    - .:/usr/src/app
But when I attempt to define this in my Dockerfile:
VOLUME .:/usr/src/app
It doesn't mount anything.
Now I understand the complexities in that I'm using OSX and so I have to virtualize the environment to run Docker via boot2docker, and that boot2docker solves the copy issue by mounting /User to the linux machine running Docker.
The documentation wants me to be explicit, but since my explicitness would require me to name my user (in this case /User/krainboltgreene/code/krainboltgreene/blankrails) it seems non-idiomatic, as that obviously doesn't work on other people's environments.
What's the solution for this? I mean, I can technically get this all working without it (as noted above, the CLI and compose work fine), but it means not being able to do project-specific provisioning (bower install, npm install, vulcanize, etc.).
You can't specify a host directory for a volume inside a Dockerfile, because of the portability reasons you mention (not everyone will have the same directories and there are security issues regarding mounting sensitive files).
If you instead do:
VOLUME /usr/src/app
Docker will automatically set up a volume at run-time for the folder, which will be mapped to a directory under /var/lib/docker/volumes.
If you want to be able to quickly make changes during development, I would suggest using COPY in the Dockerfile, but mounting local changes over the top with a volume at run-time. This has the disadvantage that if you volume mount a folder, all the contents of that directory in the container will be hidden (rather than merged).
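A sketch of the build-time side of that approach, assuming a Node-based project as hinted by the bower/npm tooling in the question (base image and commands are placeholders):
FROM node:latest
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install
At run-time the - .:/usr/src/app mount from docker-compose.yml can still overlay local changes for development, with the caveat that it hides the image's copy of that directory (including the installed node_modules), as described above.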
The docker run -v .:/usr/src/app ... command, as well as the docker-compose definitions, are executed at run-time, whereas the Dockerfile instructions are executed at build time.
By the way, the instruction in your Dockerfile is syntactically incorrect. It should be VOLUME /usr/src/app instead.
That VOLUME keyword only declares that, later at run-time, this location will be stored on a volume. So any files you add to that location by further Dockerfile instructions or manual commits are ignored and not added to the resulting image.
Now at run-time, when you did not specify a volume, Docker will generate a volume for you, which is empty by default.
To have your docker-compose setup working for other colleagues you could simply make the docker-compose configuration file being part of your blankrails project folder. Everybody then runs docker-compose from within that directory and your provided configuration will work.
EDIT:
I do not know exactly what you mean by project-specific provisioning, but if your aim is to provide default contents for the defined volume, you could do something like the following:
Add all required project files during the Dockerfile build to a /bootstrap folder on the image.
Instead of executing your app directly use a start shell script for CMD.
In that start script you can check whether the volume mounted to /usr/src/app is empty or not. When it is empty copy all the /bootstrap contents into it.
Afterwards start your app from within that script in foreground.
With that approach you can easily provide a default file set for mounted volumes. And when you re-use that volume e.g. after a container restart the container just works with the files that are on the volume without touching them again during startup. So modified files will be persisted.
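A minimal sketch of such a start script, following the /bootstrap and /usr/src/app paths from the steps above (the final app command is a placeholder):
#!/bin/sh
# Seed the mounted volume on first start, then hand off to the app in the foreground.
if [ -z "$(ls -A /usr/src/app)" ]; then
  cp -a /bootstrap/. /usr/src/app/
fi
exec npm start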
