How can I configure the Go SDK and GOPATH from a Docker container?

I'm trying to configure a Go project with JetBrains Gogland and Docker Compose. I want to use the GOPATH and the Go installation from the Docker container, i.e. use the container's Go for autocomplete and the like without installing Go on the local machine.
The project structure is:

    project root
    ├── docker-compose.yml
    ├── back/
    │   ├── Dockerfile
    │   ├── main.go
    │   └── some other packages
    └── front/
        └── all the front files...
After that, I want to deploy my back folder to /go/src/app in the Docker container. The problem is that while developing I can't use autocomplete, because the project is not in my local GOPATH and the Go versions in the Docker container and on my local machine differ.
I have already read this question, but I still can't solve my issue.
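For reference, a minimal sketch of what back/Dockerfile might look like (the Go version and build steps are assumptions, not my actual file):

    # back/Dockerfile (sketch; Go version and build steps are assumptions)
    FROM golang:1.8
    WORKDIR /go/src/app
    COPY . .
    RUN go build -o app .
    CMD ["./app"]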

At the moment this is not possible, nor do I see how it could be possible in the future. Mounting a volume in Docker means you "hide" the contents of that folder in the container and use the files on the host instead. As such, any time you mount a directory from your machine, the container's own files at that location won't be visible. This means you can't have Go installed in the container and then mount a folder and use that location for the Go sources. If you are thinking "I'll just mount things in another place and do some symlink magic / copy files around", that's just a bad idea that leads nowhere.
Gogland supports remote debugging as of EAP 10, released a few weeks ago. This allows you to debug applications running in containers or on remote hosts. As such, you can install Go and keep the source code on your machine, but have the application run in a container.
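For illustration, this is the kind of bind mount being discussed; note that ./back on the host shadows whatever the image already placed in /go/src/app (the service name and port are assumptions):

    # docker-compose.yml (sketch; service name and port are assumptions)
    version: '2'
    services:
      back:
        build: ./back
        volumes:
          - ./back:/go/src/app   # host sources shadow the image's /go/src/app
        ports:
          - "8080:8080"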

Related

docker: /opt/docker folder not created

I am trying to configure my project to dockerize it. I can test it locally in my WSL environment, and it works fine: inside Docker, the /opt/docker folder is created, and I can access my application from the host machine.
But on the dev server, I observe that /opt/docker is not even created.
I am not able to diagnose the root cause. Shouldn't Docker behave similarly on all machines?
Not necessarily, no. You shouldn't care about 'docker', how it's implemented, or what directories it uses. You should only care that it works.
For example, on my WSL installation, I have /opt/containerd, not /opt/docker. I think this is because I installed Docker locally in WSL (because I refuse to use Docker Desktop). It's different again when I deploy to my k8s cluster, which doesn't use Docker at all.
You should care about your images and containers. As long as your container runs the same, the rest is an implementation detail that should be transparent to you.

VS Code Remote-Containers extension with a Docker container - build results owned by root

I am using an Ubuntu host (22.04) with a Docker container in which I defined my build environment (compiler, toolchain, USB devices). I created a volume share so that I can access the git repo on my host from inside my container.
The problem is that when I compile a project and then need to do something on my host with the build artifacts (e.g. upload a binary to a web portal), the files belong to the root user (which is the only user in my Docker environment). Thus, I need to chmod specific files before I can access them on my host, which is annoying.
I tried to run the Docker image with a non-root user, but then VS Code is no longer able to install its tooling when it connects to the Docker container.
Is there a way to get a non-root user active in my container and still allow VS Code Remote-Containers to install extensions on connecting to the container? Or is there a better way to avoid chmodding all build results?
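The non-root attempt looked roughly like this (the image name and build command are placeholders for my actual ones):

    # Sketch: run the build as the host user so artifacts aren't owned by root.
    # "builder-image" and "make" stand in for the real image and build command.
    docker run --rm -it \
      --user "$(id -u):$(id -g)" \
      -v "$PWD:/workspace" \
      -w /workspace \
      builder-image make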

Mounting development docker container directory on host

I am using docker for software development, as I can bundle all my dependencies (compilers, libraries, ...) within a nice contained environment, without polluting the host.
The way I usually do things (which I guess is pretty common): I have a directory on the host that contains only the source code, which is mounted into a development container using a docker volume, where my software gets built and executed. Thanks to the volume being in sync, any change in the source is reflected within the container.
Here is the pitfall: when using a code editor, software dependencies are considered broken because they are not accessible from the host. Therefore linting etc. does not work.
I would like to be able to mount, let's say, /usr/local/include from the container onto the host so that, by correctly configuring my editor, I can fix all the warnings.
I guess a docker volume is not the solution here, because it would override the container's file system...
Also, I'm using Windows (no choice here), so my flow is:
Windows > Samba > Linux Host > Docker > Container
and I'd prefer not switching IDE (VS Code).
Any ideas? Thank you!
You basically wish you could reverse mount a volume from the container to the host. This is unfortunately not possible with Docker, and there are variants of this question here: How to mount a directory in docker container to host
You're stuck with copying the files from the container to the host. Whether the host path can match /usr/local/include or you have to use a different folder depends on your setup.
The easiest solution which would not require changing the docker image would be to use docker cp to copy the files.
Otherwise, you could automate this by having the image on entry (after installing all dependencies) copy the files to /tmp/include and mount a host volume to that location.
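A sketch of both approaches (the container name "devbox" is an assumption):

    # One-off: copy the headers out of a running container.
    docker cp devbox:/usr/local/include ./include-from-container

    # Automated: in the image's entrypoint, copy into a bind-mounted /tmp/include.
    cp -r /usr/local/include/. /tmp/include/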
I use https://forums.docker.com/t/how-to-mount-docker-volume-along-with-subfolders-on-the-host/120482/13 to expose Python libraries from inside the container to a local folder, so that Neovim can read the libraries for autocomplete / jump-to-definition.

Accessing node_modules after npm install inside Docker

I am running Docker with Docker Machine on Mac. I successfully set up some containers and ran npm install inside them, as explained here. This installs the node_modules inside the image and inside the container, but they are not available on the host, i.e. my IDE complains about missing node_modules.
Am I missing something? What is the best way to run npm install inside the container but be able to do development (with these dependencies) on the host?
From my docker-compose.yml:
    volumes:
      - /Users/andre/IdeaProjects/app:/home/app
      - /home/app/node_modules
Since you are using boot2docker, only the Mac host folder /Users/ is mounted into and accessible from the boot2docker VM.
That means you would need to map /home/app/node_modules to a Mac host path starting with /Users in order to see said modules on your Mac host:
    volumes:
      - /Users/andre/IdeaProjects/app:/home/app
      - /Users/andre/node_modules:/home/app/node_modules
You will not be able to access your node_modules folder inside the container from your host. Binding this folder to a host node_modules folder is not recommended, because it will cause problems while building the image.
One cheap solution is to use the Visual Studio Code extension called "Remote - Containers". This extension lets you attach Visual Studio Code to your container and transparently edit files within the container's folders. To do so, it installs an internal VS Code server within your development container. For more information, check this link.
Ensure, however, that your volumes are still created in your docker-compose.yml file.
I hope it helps :D!

Editing Docker container FS using Atom/Sublime-Text?

I'm running OS X and Docker with the help of boot2docker.
From my understanding, boot2docker is a lightweight Linux distro that runs the Docker containers. I have some Ubuntu containers that I use to run and test projects that should specifically run well on Linux.
However, every small code change made in my host text editor of choice requires me to re-build the image and re-run the container, then run the app and confirm that the change didn't break something.
Is there a way for me to open a Docker container FS folder in a text editor from my host machine (a.k.a. remote edit)?
Have any of you done this? Any ideas would be awesome. I'm thinking about setting up SFTP or sshd in the Docker container, but I'd like your opinion.
What I often do is, in development, mount the source code of the application to its usual place in a volume. Then, I set the command (or entrypoint) of the container to a script that launches it in "development mode" (for example, by using nodemon for a node.js application, setting RAILS_ENV=development in Rails, and so on).
Volumes do work on Mac OS X (and I assume Windows) under boot2docker or docker-machine, with the caveat that you need to be working somewhere beneath your home directory.
For a concrete example, here's a repository that I set this up in. The ingredients:
script/dev is my "dev-mode" entrypoint. It launches the main application under nodemon.
When I launch the container, I mount the source directory into the container as a volume and set script/dev as the command. (I'm using docker-compose here to launch and link in an upstream dependency, so I can do everything in one command.)
With those two things in place, I can run docker-compose up, make a source change in whatever editor I choose on my host, save the file, and the service within the container auto-reloads to bring my changes into effect. Presto!
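A minimal sketch of those ingredients (image contents, port, and file names are assumptions, not the repository's actual files):

    # docker-compose.yml (sketch)
    version: '2'
    services:
      web:
        build: .
        command: script/dev
        volumes:
          - .:/usr/src/app      # source edits on the host appear in the container
        ports:
          - "3000:3000"

    # script/dev (sketch): launch the app under nodemon so it auto-reloads on change
    #!/bin/sh
    exec nodemon server.js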
