VS Code Remote - Containers extension to Docker container - build results owned by root

I am using an Ubuntu host (22.04) that runs a Docker container in which I defined my build environment (compiler, toolchain, USB devices). I created a volume mount so that I can access the Git repo on my host from inside my container.
The problem is that when I compile a project and then need to do something on my host with the build artifacts (e.g. upload a binary to a web portal), the files belong to the root user (the only user in my Docker environment). So I have to chmod specific files before I can access them on my host, which is annoying.
I tried to run the Docker image with a user name, but then VS Code is no longer able to install anything when it connects to the container.
Is there a way to get an active user in my container and still allow VS Code Remote - Containers to install extensions when connecting to the container? Or is there a better way to avoid chmodding all build results?
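One common approach (a sketch, not taken from the question; the user name dev and the UID/GID build args are assumptions) is to create a non-root user in the image whose UID/GID match the host user, and point the extension at it with the remoteUser property, so the VS Code server and extensions are installed for that user and build artifacts end up owned by your host UID:

# Dockerfile (fragment) - create a user matching the host UID/GID
# (pass the values of `id -u` / `id -g` from the host as build args)
ARG USER_UID=1000
ARG USER_GID=1000
RUN groupadd --gid ${USER_GID} dev \
 && useradd --uid ${USER_UID} --gid ${USER_GID} --create-home dev

// devcontainer.json (fragment) - run the VS Code server and extensions as that user
{
  "build": { "dockerfile": "Dockerfile" },
  "remoteUser": "dev"
}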

Related

docker: /opt/docker folder not created

I am trying to dockerize my project. I can test it locally in my WSL environment and it works fine: inside Docker, the /opt/docker folder is created and I can access my application from the host machine.
But on the dev server, I observe that /opt/docker is not even created.
I am not able to diagnose the root cause. Shouldn't Docker behave the same on all machines?
Not necessarily, no. You shouldn't care about 'docker', how it's implemented, or what directories it uses. You should only care that it works.
For example, on my WSL installation I have /opt/containerd, not /opt/docker. I think this is because I installed Docker locally inside WSL (I refuse to use Docker Desktop). It's different again when I deploy to my k8s cluster, which doesn't use Docker at all.
You should care about your images and containers. As long as your container runs the same, the rest is an implementation detail that should be transparent to you.

Copied ipynb opens in read-only mode within docker container

I'm running Docker Desktop on Windows 10. I used the repository for the Fonduer Tutorials to create an image to run with Docker. This works fine so far, and I am able to run the notebooks included in the repository.
I would now like to copy some Jupyter notebooks and other data from the host to the container called fonduer-tutorials-jupyter-1 to be able to make use of the Fonduer framework.
I am able to copy the files to the container and also to open the Jupyter notebooks, but unfortunately they open in read-only mode.
How can I copy files from host to container and still have permission to write on a Windows machine?
I read a lot about options like chown and other flags to use with COPY, but it seems they're not available on Windows machines.
Let's assume my UID obtained with id -u is 1000 and my GID obtained with id -g is 2000, if that is relevant to a solution.
To avoid copying the files manually and the access restrictions described above, a better solution is to map the host directory to a volume within the container via a .yml file, in this case docker-compose.yml. To do so, add the following to the .yml file:
volumes:
- [path to host directory]:[container path where the files should be placed in]
With this, the files are available both within the container and on the host.
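For example, a fragment along these lines (the service name and both paths are placeholders, not taken from the question):

services:
  jupyter:
    volumes:
      # left side: host directory; right side: path inside the container.
      # Because this is a bind mount, edits made in the container are written
      # straight back to the host files rather than to a read-only copy.
      - ./notebooks:/home/jovyan/work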

How to attach VSCode to a remote Docker container while setting the correct user

I start a Docker container with a special bash script that runs the container and then creates a user X with a dynamic name, UID and GID in the container. I can then bash into the container and perform actions as this user X. The script also creates an 'alias' user named vscode with the same UID as the dynamically created user X.
In VSCode I can attach to this container. Two questions:
How can I set up VS Code to perform all actions as the 'vscode' user or as user X? (When using devcontainer.json to create the container this is trivial, but here I attach to an existing container and devcontainer.json is not used.)
In devcontainer.json you have the option to automatically install extensions. Which settings file do I need to create to automatically install extensions when attaching to a container?
The solution should be automated. Manual intervention and committing the image, as suggested below, is possible but would make it much harder for users to just use my Docker image.
I updated to VS Code 1.39 and tried to add:
ADD server-env-setup /root/.vscode-server/server-env-setup
But "server-env-setup" seems to be only used for WSL.
I'll answer your questions in reverse order:
VS Code installs extensions after creating the container by using the docker exec command.
And now the recipe. The easiest way is to start from a container already created by VS Code:
Run "Open Folder in Container" to create the dev container.
Once the container is up and you can work in VS Code, stop your environment by clicking "Close Remote Connection".
Run docker ps -a; you should see the recently exited containers.
The most recent container in that list is a7aa5af7ec08 vsc-typescript-2ea9f347739c5397afc431028000c02b. This is your container with all extensions installed, and it doesn't matter whether you installed the extensions manually or configured them via devcontainer.json.
Run docker commit a7aa5af7ec08 all-installed-vscode-image:latest. Now you have a Docker image with all your favorite software installed. You can push this image to your preferred Docker registry and use it on other machines as well.
Now you can run docker run -i -u vscode all-installed-vscode-image:latest and attach VS Code to this container. This is the answer to your first question.
Also, you can review the VS Code documentation and use devcontainer.json configurations when attaching to already running containers, and even to containers running on remote machines.
VS Code now implements a "remoteUser" property which you can set in the attached container configuration file. This ensures that VS Code logs into the container as the correct user.
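As a rough illustration of such a configuration (the user name and the extension ID are assumptions; the property names come from the Remote - Containers documentation):

{
  // user the VS Code server runs as inside the container
  "remoteUser": "vscode",
  // extensions installed automatically when VS Code attaches
  "extensions": ["ms-python.python"]
}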

How can I configure go sdk and GOPATH from docker container?

I'm trying to set up a Go project with JetBrains Gogland and Docker Compose. I want to use the GOPATH and the Go installation from the Docker container, i.e. use the container's Go for autocompletion etc. without installing Go on the local machine.
The project structure is:
project root
    docker-compose.yml
    back/
        Dockerfile
        main.go
        some other packages
    front/
        all the front files...
After that, I want to deploy my back folder to /go/src/app in the Docker container. The problem is that when I develop the project I can't use autocompletion, as this project is not in my local GOPATH, and there are different Go versions in the Docker container and on my local machine.
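For reference, a compose file matching that layout might look roughly like this (the service name, build path and mount target follow the structure above; everything else is an assumption):

services:
  back:
    build: ./back
    volumes:
      # mount the host's back folder over /go/src/app inside the container
      - ./back:/go/src/app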
I already read this question but I still can't solve my issue.
At the moment this is not possible, nor do I see how it could become possible in the future. Mounting a volume in Docker means you "hide" the contents of that folder in the container and use the files on the host instead. As such, whenever you mount a directory from your machine, the container's own files at that location won't be available in that instance. This means you can't have Go installed in the container and then mount a folder and use that location for the Go sources. If you are thinking "I'll just mount things in another place, do some symlink magic / copy files around", that's just a bad idea that leads nowhere.
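A quick way to see this effect (the image name and paths are placeholders): listing a bind-mounted path shows only the host's files, not whatever the image put there:

# the image's own contents of /go/src/app are hidden by the bind mount
docker run --rm -v "$PWD/back:/go/src/app" my-go-image ls /go/src/app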
Gogland supports remote debugging as of EAP 10, released a few weeks ago. This allows you to debug applications running in containers or on remote hosts. As such, you can have Go and the source code installed on your machine but run the application in containers.

docker install on private network without network instructions

I have to install Docker on Windows 7 in a private network with no internet access.
I can download anything and bring it in by USB from another computer.
How do I install and use Docker?
Meaning: from installation (what to install and how to set it up) to creating the first image.
Most of the instructions I found use a proxy, and I can't use a proxy.
The installation itself involves copying docker-machine-Windows-x86_64.exe, renaming it to docker-machine.exe, and creating a VirtualBox machine with it.
The issue is that it will attempt to download boot2docker.iso (the TinyCore-based Linux image which includes Docker pre-installed).
That means you need to copy that file onto your USB key first, from boot2docker/boot2docker/releases.
From issue 539:
docker-machine create mydocker --virtualbox-boot2docker-url=file:///Users/auser/Downloads/boot2docker.iso --driver=virtualbox
You will need a similar Docker setup on a machine with internet access in order to:
docker pull the images you want
docker save them
copy them onto the USB key
copy them onto your offline server, into C:\Users... (the only folder mounted in the boot2docker VM)
Then you need to open an SSH session:
docker-machine ssh default
Within that session, you can access the folder where the saved images were copied and docker load them.
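Putting those steps together, a sketch of the commands (the image name and file paths are placeholders; in a boot2docker VM the host's C:\Users folder is typically available under /c/Users):

# on the machine with internet access
docker pull alpine
docker save -o alpine.tar alpine
# copy alpine.tar via USB into the C:\Users folder on the offline machine, then:
docker-machine ssh default
docker load -i /c/Users/alpine.tar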
