Accessing node_modules after npm install inside Docker - docker

I am running Docker with Docker Machine on Mac. I successfully set up some containers and ran npm install inside them, as explained here. This installs the node_modules inside the image and inside the container, but they are not available on the host, i.e. my IDE complains about missing node_modules.
Am I missing something? What is the best way to run npm install inside the container but be able to do development (with these dependencies) on the host?
From my docker-compose.yml:
volumes:
- /Users/andre/IdeaProjects/app:/home/app
- /home/app/node_modules

Since you are using boot2docker, only the Mac host folder /Users/ is mounted and accessible from the boot2docker VM.
That means you would need to map /home/app/node_modules to a Mac host path starting with /Users, to see said modules on your Mac host.
volumes:
- /Users/andre/IdeaProjects/app:/home/app
- /Users/andre/node_modules:/home/app/node_modules

You will not be able to access the node_modules folder inside the container from your host. Binding this folder to a node_modules folder on your host is not recommended, because it will cause problems when building the image.
One cheap solution is to use the Visual Studio Code extension called "Remote-Containers". This extension allows you to attach Visual Studio Code to your container and transparently edit files within the container's folders. To do so, it installs an internal VS Code server within your development container. For more information check this link.
Ensure, however, that your volumes are still created in your docker-compose.yml file.
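For illustration, a minimal .devcontainer/devcontainer.json along these lines could look as follows; the service name, file locations and workspace path are assumptions, not taken from the question:
{
  // attach VS Code to the existing docker-compose service instead of building a separate image
  "name": "app",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/home/app"
}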
I hope it helps :D!

Related

VScode remote-container extension to docker container - build results root

I am using an ubuntu host (22.04) which runs a docker container in which I defined my build environment (compiler, toolchain, usb devices). I created a volume share so that I can access the git repo on my host, inside my container.
The problem is, when I compile a project, and I need to do something on my host with the build artifacts (e.g. upload a binary to a web portal), the files belong to the root user (which is the only user on my docker environment). Thus, I need to chmod specific files before I can access them on my host which is annoying.
I tried to run the docker image with a user name, but then VS Code is no longer able to install stuff when it connects to the docker container.
Is there a way to get an active user in my container, and still allow VScode remote-container to install extensions on connecting to the container? Or is there a better way to avoid chmodding all build results?

VS Code dev container mounted directory is empty

I have a devcontainer compose project that requires mongo and a replica server. This requires a few mongosh commands to be run, which I'd like to do in a separate container as a bash script.
My issue is that when using "Clone repository into Container volume" my mounted directory is empty. This works fine when I first check the repo out locally and then build the container from that.
Here is a demo repository that shows the issue: https://github.com/jrj2211/vscode-remote-try-node-mongo-compose
In this project, the compose file mounts the .devcontainer directory. The file I need is at the path: .devcontainer/scripts/mongosetup.sh.
volumes:
- ./scripts:/scripts
This produces the correct result locally but the folder is empty when in a docker volume.
What is the correct path to the folder location in the WSL2 volume? Is there a way to make this work both locally and cloned in a docker volume?
I tried to set an ENV variable from the devcontainer.json that pointed to ${workspaceFolder} but that ended up as an empty string in compose.
This documentation, linked from the 2nd link below (the one that talks about "Clone Repository in Container Volume"), makes me believe it should work this way:
https://code.visualstudio.com/remote/advancedcontainers/add-local-file-mount
https://code.visualstudio.com/remote/advancedcontainers/improve-performance
I was able to get this working through the use of h4l's brilliant code. This takes containerWorkspaceFolder and localWorkspaceFolder and turns them into environment variables available in docker-compose. This has the added benefit of continuing to work both locally and in a container.
https://github.com/h4l/dev-container-docker-compose-volume-or-bind
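This is not the exact code from that repository, but the compose side of the idea looks roughly like this, assuming a helper script (or your shell) has exported LOCAL_WORKSPACE_FOLDER with the host path of the checked-out repository before compose runs, and that the compose file lives inside .devcontainer:
volumes:
  # the :-.. default keeps a plain local checkout working unchanged
  - ${LOCAL_WORKSPACE_FOLDER:-..}/.devcontainer/scripts:/scripts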
Hopefully soon those variables become available in container mode directly so additional scripts aren't needed.

Mounting development docker container directory on host

I am using docker for software development, as I can bundle all my dependencies (compilers, libraries, ...) within a nice contained environment, without polluting the host.
The way I usually do things (which I guess is pretty common): I have a directory on the host that only contains the source code, which is mounted into a development container using a docker volume, where my software gets built and executed. Thanks to volumes being in sync, any changes in the source are reflected within the container.
Here is the pitfall: when using a code editor, software dependencies are considered broken as they are not accessible from the host. Therefore linting, etc... does not work.
I would like to be able to mount, let's say, /usr/local/include from the container onto the host so that, by correctly configuring my editor, I can fix all the warnings.
I guess docker volume is not the solution here, because it would override the contained file system...
Also, I'm using Windows (no choice here) therefore my flow is:
Windows > Samba > Linux Host > Docker > Container
and I'd prefer not switching IDE (VS Code).
Any ideas? Thank you!
You basically wish you could reverse mount a volume from the container to the host. This is unfortunately not possible with Docker, and there are variants of this question here: How to mount a directory in docker container to host
You're stuck with copying the files from the container to the host. Whether the host path can match /usr/local/include, or you have to use a different folder, depends on your setup.
The easiest solution which would not require changing the docker image would be to use docker cp to copy the files.
Otherwise, you could automate this by having the image on entry (after installing all dependencies) copy the files to /tmp/include and mount a host volume to that location.
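A rough sketch of that second option; every path and file name here is illustrative, not taken from the question:
Dockerfile (fragment):
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
docker-entrypoint.sh:
#!/bin/sh
# copy the container's headers into the bind-mounted folder so the host editor can see them
mkdir -p /tmp/include
cp -r /usr/local/include/. /tmp/include/
# then hand over to whatever command the image normally runs
exec "$@"
docker-compose.yml (fragment):
volumes:
  - ./include-from-container:/tmp/include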
I use https://forums.docker.com/t/how-to-mount-docker-volume-along-with-subfolders-on-the-host/120482/13 to expose python libraries from inside the container to a folder locally so that neovim can read the libraries for autocomplete/jump to definitions.

How can I configure go sdk and GOPATH from docker container?

I'm trying to configure a golang project with Jetbrains Gogland and docker compose. I want to use GOPATH and go from the docker container. I mean using the go installation from the container for autocomplete etc. without installing golang on the local machine.
the project structure is:
project root
  docker-compose.yml
  back/
    Dockerfile
    main.go
    some other packages
  front/
    all the front files...
After that, I want to deploy my back folder to /go/src/app in the docker container. The problem is that when I develop the project I can't use autocomplete, as this project is not in my local GOPATH and there are different golang versions in the docker container and on my local machine.
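For reference, a docker-compose service along the lines described above would look roughly like this (the service name is my own placeholder):
back:
  build: ./back
  volumes:
    # mount the host's back folder over the image's source location
    - ./back:/go/src/app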
I already read this question but I still can't solve my issue.
At the moment this is not possible. Nor do I see how it could be possible in the future. Mounting a volume in docker means you "hide" the contents of that folder from the container and use the files on the host instead. As such, any time you'll mount the directory from your machine, your container files from that instance won't be available to the machine. This means you can't have Go installed in the container and then mount a folder and use that location for the Go sources. If you are thinking: I'll just mount things in another place, do some symlink magic / copy files around, that's just a bad idea that leads to nowhere.
Gogland supports remote debugging as of EAP 10, released a few weeks ago. This allows you to debug applications running in containers or on remote hosts. As such, you can install Go and the source code on your machine but have them running in containers.

Is there a way to replicate pwd in a volume mount for docker in a boot2docker context?

So currently I can do: docker run -v .:/usr/src/app or even specify it in my docker-compose.yml:
web:
volumes:
- .:/usr/src/app
But when I attempt to define this in my Dockerfile:
VOLUME .:/usr/src/app
It doesn't mount anything.
Now I understand the complexities in that I'm using OSX and so I have to virtualize the environment to run Docker via boot2docker, and that boot2docker solves the copy issue by mounting /Users to the linux machine running Docker.
The documentation wants me to be explicit, but since my explicitness would require me to name my user (in this case /User/krainboltgreene/code/krainboltgreene/blankrails) it seems non-idiomatic, as that obviously doesn't work on other people's environments.
What's the solution for this? I mean, I can technically get this all working without (as noted above the CLI and compose works fine), but it means not being able to do project specific provisioning (bower install, npm install, vulcanize, etc).
You can't specify a host directory for a volume inside a Dockerfile, because of the portability reasons you mention (not everyone will have the same directories and there are security issues regarding mounting sensitive files).
If you instead do:
VOLUME /usr/src/app
Docker will automatically set up a volume at run-time for the folder, which will be mapped to a directory under /var/lib/docker/volumes.
If you want to be able to quickly make changes during development, I would suggest using COPY in the Dockerfile, but mounting local changes over the top with a volume at run-time. This has the disadvantage that if you volume mount a folder, all the contents of that directory in the container will be hidden (rather than merged).
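A minimal sketch of that suggestion; the node base image and npm install step are assumptions, since the question does not show a Dockerfile:
Dockerfile:
FROM node
WORKDIR /usr/src/app
# bake the sources into the image so it works without any mounts
COPY . /usr/src/app
RUN npm install
docker-compose.yml (development only):
web:
  build: .
  volumes:
    # mount local changes over the baked-in copy at run-time
    - .:/usr/src/app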
The docker run -v .:/usr/src/app ... command, as well as the docker-compose definitions, are applied at run time, whereas the Dockerfile instructions are executed at build time.
By the way the instruction in your Dockerfile is syntactically incorrect. It should be VOLUME /usr/src/app instead.
That VOLUME keyword only defines that later during runtime this location will be stored on a volume. So all files that you add by further Dockerfile instructions or manual commits to that location are ignored and not added to the resulting image.
Now during runtime, when you did not specify a volume, Docker will generate a volume for you, which is empty by default.
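To illustrate that behaviour, consider a Dockerfile fragment like this (the file name is just an example):
VOLUME /usr/src/app
# any change made to /usr/src/app after the VOLUME declaration is discarded at build time,
# so this file will not show up in the volume that Docker creates at run-time
RUN touch /usr/src/app/example.txt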
To have your docker-compose setup working for other colleagues, you could simply make the docker-compose configuration file part of your blankrails project folder. Everybody then runs docker-compose from within that directory and your provided configuration will work.
EDIT:
I do not know exactly what you mean by project-specific provisioning. But if your aim is to provide default contents for the defined volume, you could do something like the following:
Add all required project files during the Dockerfile build to a /bootstrap folder on the image.
Instead of executing your app directly use a start shell script for CMD.
In that start script you can check whether the volume mounted to /usr/src/app is empty or not. When it is empty copy all the /bootstrap contents into it.
Afterwards start your app from within that script in foreground.
With that approach you can easily provide a default file set for mounted volumes. And when you re-use that volume e.g. after a container restart the container just works with the files that are on the volume without touching them again during startup. So modified files will be persisted.
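A minimal sketch of such a start script, assuming the defaults were copied to /bootstrap during the build and the app is started with npm start (both are placeholders):
#!/bin/sh
# start.sh - seed the mounted volume on first use, then run the app in the foreground
if [ -z "$(ls -A /usr/src/app 2>/dev/null)" ]; then
  # the volume is empty: copy the default project files into it
  cp -r /bootstrap/. /usr/src/app/
fi
exec npm start
In the Dockerfile this would be wired up with something like CMD ["./start.sh"].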
