docker: /opt/docker folder not created

I am trying to dockerize my project. I can test it locally in my WSL environment, and it works fine: inside Docker, the /opt/docker folder is created, and I can access my application from the host machine.
But on the dev server, I observe that /opt/docker is not even created.
I am not able to diagnose the root cause. Shouldn't Docker behave similarly on all machines?

Not necessarily, no. You shouldn't care about 'docker', how it's implemented, or what directories it uses. You should only care that it works.
For example, on my WSL installation, I have /opt/containerd, not /opt/docker. I think this is because I installed docker directly in WSL (because I refuse to use Docker Desktop). It's different again when I deploy to my k8s cluster, which doesn't use docker at all.
You should care about your images and containers. As long as your container runs the same, the rest is an implementation detail that should be transparent to you.
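If you are curious where a particular engine keeps its data, you can ask the engine itself rather than looking for a fixed path; a minimal sketch (the reported directory will differ between installations):

    # ask the local engine where it stores images, containers and volumes
    docker info --format '{{ .DockerRootDir }}'
    # typically prints /var/lib/docker, but treat the path as an implementation detail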

Related

VScode remote-container extension to docker container - build results root

I am using an Ubuntu host (22.04) with a Docker container in which I defined my build environment (compiler, toolchain, USB devices). I created a volume share so that I can access the git repo on my host from inside my container.
The problem is that when I compile a project and need to do something on my host with the build artifacts (e.g. upload a binary to a web portal), the files belong to the root user (the only user in my Docker environment). Thus, I need to chmod specific files before I can access them on my host, which is annoying.
I tried to run the Docker image with a user name, but then VS Code is no longer able to install anything when it connects to the Docker container.
Is there a way to get an active user in my container and still allow the VS Code remote-container extension to install extensions when connecting to the container? Or is there a better way to avoid chmodding all build results?
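For reference, the "run with a user" attempt described above typically looks something like this (the image name and mount paths are hypothetical placeholders):

    # run the build container as the calling host user so build artifacts are not root-owned
    docker run --rm -it \
        --user "$(id -u):$(id -g)" \
        -v "$PWD":/workspace -w /workspace \
        my-build-image make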

How can I manage the SELinux context of a docker container?

tl;dr: Is there a way to run a docker container in the caller's SELinux context?
I am running a bunch of different docker images on different (mostly RH-based) servers for building/testing purposes.
Imagine an automated flow like this:
checkout sources
run on CentOS 7
run on Ubuntu 18.04
run on Fedora 30
One particular feature of this setup is that all docker containers work on the same (versioned) source files, bind-mounted into /src. Earlier on, I discovered I had to supply the SELinux mount options :Z or :z for the container to get access to the files checked out on the host.
Due to a design decision, the containers also have access to the other containers' build artifacts through /src. And apparently the relabeling of /src can take minutes on a host system that has a rather large build history.
I could of course try to restructure things or use --security-opt label:disable on the containers. But I wondered why the relabeling is necessary in the first place. Could I not simply run the containers in my context? They basically work like a sophisticated chroot in that setup and do not expose any public services.
Bonus question: What, exactly, does --security-opt label:disable do?
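For context, the two variants being weighed look roughly like this; the paths and image are placeholders, and newer Docker versions write the option as label=disable rather than label:disable:

    # relabel the bind-mounted sources so the container's SELinux context may read them
    # (this relabel pass is what can take minutes on a large /src tree)
    docker run --rm -v /builds/checkout:/src:z centos:7 /src/build.sh
    # or disable SELinux label separation for this one container
    docker run --rm --security-opt label=disable -v /builds/checkout:/src centos:7 /src/build.sh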

How can I configure go sdk and GOPATH from docker container?

I'm trying to configure a Go project with JetBrains Gogland and docker-compose. I want to use the GOPATH and the Go installation from the docker container, i.e. use the container's Go installation for autocomplete etc. without installing Go on the local machine.
the project structure is:
project root
docker-compose.yml
back|
Dockerfile
main.go
some other packages
front|
all the front files...
After that, I want to deploy my back folder to /go/src/app in the docker container. The problem is that when I develop the project I can't use autocomplete, as this project is not in my local GOPATH and there are different Go versions in the docker container and on my local machine.
I already read this question but I still can't solve my issue.
At the moment this is not possible, nor do I see how it could become possible in the future. Mounting a volume in docker means you "hide" the contents of that folder from the container and use the files on the host instead. As such, any time you mount a directory from your machine, the container's files at that location won't be available to your machine. This means you can't have Go installed in the container and then mount a folder and use that location for the Go sources. If you are thinking "I'll just mount things in another place and do some symlink magic / copy files around", that's just a bad idea that leads nowhere.
Gogland supports remote debugging as of EAP 10, released a few weeks ago. This allows you to debug applications running in containers or on remote hosts. As such, you can install Go and keep the source code on your machine, but have the application running in a container.
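The answer doesn't spell out the wiring, so treat the following as an assumption: one common way to do remote debugging in a container is to run the app under a headless Delve server and point the IDE at the exposed port. The image name and paths here are placeholders:

    # run the service under a headless Delve debug server inside the container
    # (my-go-image is a placeholder image that has dlv installed)
    docker run --rm -p 2345:2345 \
        -v "$PWD/back":/go/src/app -w /go/src/app \
        my-go-image \
        dlv debug --headless --listen=:2345 --api-version=2
    # then attach the IDE's remote debug configuration to localhost:2345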

Jenkins-node as docker container

The jenkins-node is a docker container on which the jobs are run. A Jenkins job running in the dockerized jenkins-node checks out the project from svn/git and runs the build and tests in other docker containers launched by the job. In doing so, the Jenkins job mounts files/directories from the checked-out project into the build container via "docker run -v : ...". This sounds like docker-in-docker, but according to http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ docker-in-docker is not good in CI.
With the recommended approach (mount the host's docker socket into the jenkins-node container) I'm facing the problem that the mounted files appear as empty directories in the build container. I think this is because those files are not known on the host (they are checked out inside the jenkins-node container). Providing the --privileged flag doesn't help either.
However, the 'evil' docker-in-docker approach works fine in this scenario. Am I doing something wrong, or is docker-in-docker the way to go here?
With the "expose the docker socket" approach, all volume paths are going to be relative to the host. So if you need to access something in the jenkins-node container you have two options:
make sure the checkout directory is a volume, and use --volumes-from jenkins-node as an argument to all the other docker containers. From your question it sounds like the containers created by the test suite would be configured from the app repos, so this is probably not a good option.
make the checkout directory a host mounted volume, -v /git/checkouts:/path/in/jenkins-node/container, when you start jenkins-node. That way the files will actually end up on the host (not in the jenkins-node container), and you'll be able to access them at the host path (sketched below).
I would also say that the article you're referencing is more of a caution. dind is still done quite a bit; sometimes it's even necessary. It's not the worst thing ever, just be aware that it's not a silver bullet and does come with its own set of issues/problems.
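A rough sketch of the second option, assuming the socket-mount setup described above; the image names are placeholders, and /git/checkouts and /path/in/jenkins-node/container are the example paths from the answer:

    # start jenkins-node with the host's docker socket and a host-backed checkout directory
    docker run -d --name jenkins-node \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v /git/checkouts:/path/in/jenkins-node/container \
        my-jenkins-node-image
    # build containers launched from inside jenkins-node must then reference the host path
    docker run --rm -v /git/checkouts/myproject:/src my-build-image make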

Editing Docker container FS using Atom/Sublime-Text?

I'm running OSX and Docker with the help of boot2docker.
From my understanding, boot2docker is a lightweight Linux distro that runs the docker containers. I have some Ubuntu containers that I use to run and test projects that should specifically run well on Linux.
However, every small code change from my host text editor of choice requires me to rebuild the image, re-run the container, run the app, and confirm that the change didn't break something.
Is there a way for me to open a Docker container FS folder in a text editor from my host machine? (a.k.a. remote edit?)
Has anyone done this? Any ideas would be awesome. I'm thinking about setting up SFTP or SSHD in the Docker container, but I'd like your opinion.
What I often do is, in development, mount the source code of the application to its usual place in a volume. Then, I set the command (or entrypoint) of the container to a script that launches it in "development mode" (for example, by using nodemon for a node.js application, setting RAILS_ENV=development in Rails, and so on).
Volumes do work on Mac OS X (and I assume Windows) under boot2docker or docker-machine, with the caveat that you need to be working somewhere beneath your home directory.
For a concrete example, here's a repository that I set this up in. The ingredients:
script/dev is my "dev-mode" entrypoint. It launches the main application under nodemon.
When I launch the container, I mount the source directory into the container as a volume and set script/dev as the command. (I'm using docker-compose here to launch and link in an upstream dependency, so I can do everything in one command.)
With those two things in place, I can run docker-compose up, make a source change in whatever editor I choose on my host, save the file, and the service within the container auto-reloads to bring my changes into effect. Presto!
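For a plain-docker rendering of the same idea (the image name, host path, and port here are placeholders standing in for the repository's actual setup, with script/dev as the dev-mode entrypoint mentioned above):

    # mount the source over its usual place in the image and start the dev-mode script
    docker run --rm -it \
        -v "$HOME/projects/myapp":/usr/src/app -w /usr/src/app \
        -p 3000:3000 \
        my-app-image script/dev
    # edit files on the host; nodemon inside the container restarts the service on save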
