My main developer box runs CentOS 8. I'm working on a project where I need to do some builds on RHEL7/8/9. I have Docker installed on the host and I'm pulling the RHEL7 image from registry.redhat.io/rhel7:7.9-702.1655292978, RHEL8 from Docker Hub (redhat/ubi8:latest) and RHEL9 also from Docker Hub (redhat/ubi9:latest). RHEL 7/8 work without issue, but RHEL9 fails with the error:
subscription-manager is disabled when running inside a container.
Please refer to your host system for subscription management.
I have a valid subscription, but for some reason it does not seem possible to register a RHEL9 container from a non-RHEL host. I'm not sure I understand the reason for this, but is there a workaround (other than changing the host to RHEL) so that I can register my RHEL9 container?
Someone on my team found a solution. The article https://access.redhat.com/solutions/5870841 basically comes down to injecting the subscription info (from a registered system) into the container.
Here is a sample Dockerfile I used:
FROM registry.redhat.io/ubi9/ubi
COPY rhel9_sub/redhat.repo /run/secrets/redhat.repo
COPY rhel9_sub/rhsm /run/secrets/rhsm
COPY rhel9_sub/entitlement /run/secrets/etc-pki-entitlement
where the rhel9_sub folder I was copying from came from my registered RHEL9 host.
I can now query the repo and pull kernel packages into the container without issue.
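For reference, a quick check that the entitlement works could look something like this (the image tag rhel9-build and the kernel-devel package are just example placeholders):
# Build the image with the injected subscription data
docker build -t rhel9-build .
# The entitlement should now let dnf reach the RHEL9 repos from inside the container
docker run --rm rhel9-build dnf repolist
docker run --rm rhel9-build dnf install -y kernel-devel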
Related
I am trying to dockerize my project. I can test it locally in my WSL environment, and it works fine. Inside Docker, the /opt/docker folder is created, and I can access my application from the host machine.
But on the dev server, I observe that /opt/docker is not even created.
I am not able to diagnose the root cause. Shouldn't Docker behave the same on all machines?
Not necessarily, no. You shouldn't care about 'docker', how it's implemented or which directories it uses. You should only care that it works.
For example, on my WSL installation I have /opt/containerd, not /opt/docker. I think this is because I installed Docker locally in WSL (because I refuse to use Docker Desktop). It's different again when I deploy to my k8s cluster, which doesn't use Docker at all.
You should care about your images and containers. As long as your container runs the same, the rest is an implementation detail that should be transparent to you.
I am using an Ubuntu host (22.04) that runs a Docker container in which I defined my build environment (compiler, toolchain, USB devices). I created a volume share so that I can access the git repo on my host from inside the container.
The problem is that when I compile a project and then need to do something on my host with the build artifacts (e.g. upload a binary to a web portal), the files belong to the root user (which is the only user in my Docker environment). Thus, I need to chmod specific files before I can access them on my host, which is annoying.
I tried to run the Docker image with a user name, but then VSCode is no longer able to install anything when it connects to the Docker container.
Is there a way to get an active user in my container and still allow the VSCode Remote - Containers extension to install extensions when connecting to the container? Or is there a better way to avoid chmodding all build results?
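One common workaround for the root-owned artifacts is to run the build container with the host user's UID/GID; a minimal sketch, assuming a build image named my-build-image and a make-based build (both placeholders):
# Run the container as your host UID/GID so files written to the shared
# volume are owned by you instead of root
docker run --rm \
  -u "$(id -u):$(id -g)" \
  -v "$PWD":/workspace -w /workspace \
  my-build-image make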
I start a Docker container with a special bash script that runs the container and then creates a user X with a dynamic name, UID and GID in the container. I can then bash into the container and perform actions as this user X. The script also creates an 'alias' user named vscode with the same UID as the previously created dynamic user X.
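A rough sketch of what such a startup script might do (the image name, user names and the use of useradd are assumptions; the real script is not shown here):
# Start the container, then create the dynamic user and the 'vscode' alias
docker run -d --name devbox my-build-image sleep infinity
docker exec devbox useradd -m -o -u "$(id -u)" userX
docker exec devbox useradd -m -o -u "$(id -u)" vscode   # alias user with the same UID
docker exec -u userX -it devbox bash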
In VSCode I can attach to this container. Two questions:
How can I set up VSCode to perform all actions as the 'vscode' user or as the user X? (When using devcontainer.json to create the container this is trivial, but now I attach to an existing container and devcontainer.json is not used.)
In devcontainer.json you have the option to automatically install extensions. Which settings file do I need to create to automatically install extensions when attaching to a container?
The solution should be automated. E.g. manual intervention and committing the image as suggested below is possible, but it would make it much harder for users to just use my Docker image.
I updated to vscode 1.39 and tried to add:
ADD server-env-setup /root/.vscode-server/server-env-setup
But "server-env-setup" seems to be only used for WSL.
I'll answer your questions in reverse order:
VSCode installs extensions after creating the container by using the docker exec command.
And now the recipe. The easiest way is to start from a container already created by VSCode:
Run "Open Folder in Container" to create the dev container.
Once the container is ready, you can work in VSCode. Then stop your environment by clicking "Close Remote Connection".
Run docker ps -a. You should see the most recently exited containers, something like this:
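(The listing below is reconstructed for illustration; only the container ID and image name come from the original example, the other columns are made up.)
CONTAINER ID   IMAGE                                             COMMAND       CREATED         STATUS                     NAMES
a7aa5af7ec08   vsc-typescript-2ea9f347739c5397afc431028000c02b   "/bin/sh …"   5 minutes ago   Exited (0) 2 minutes ago   upbeat_blackwell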
As you can see, the latest container is a7aa5af7ec08, created from the image vsc-typescript-2ea9f347739c5397afc431028000c02b. This is your container with all extensions installed. And it doesn't matter whether you installed the extensions manually or configured them via devcontainer.json.
Run docker commit a7aa5af7ec08 all-installed-vscode-image:latest. Now you have a Docker image with all your favorite software installed. You can push this image to your favorite Docker registry and use it on other machines as well.
Now you can run docker run -i -u vscode all-installed-vscode-image:latest and attach VSCode to this container. That answers your first question.
Also, you can review the VSCode documentation and use devcontainer.json configuration when you attach to already running containers, and even to containers running on remote machines.
VSCode now implements a "remoteUser" property which you can set in the image configuration. This ensures that VSCode logs into the container as the correct user.
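For example, the per-image attached container configuration might contain something like the following, assuming a vscode user already exists in the image (the exact file location and command name vary slightly between VSCode versions):
{
    // Log in to the container as this user instead of root
    "remoteUser": "vscode"
}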
I am currently running Jenkins in Docker. When building Docker apps, I am unsure whether I should simply bind-mount the host's /var/run/docker.sock file, or install another Docker instance inside my Jenkins container (Docker-in-Docker, DinD). I have read that using anything other than the docker.sock bind mount used to be discouraged.
I don't really understand why we should use anything other than the host's Docker daemon, apart from not polluting it.
Source: https://itnext.io/docker-in-docker-521958d34efd
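For context, the two options discussed here look roughly like this (image names are examples, and the Jenkins image would additionally need a Docker CLI installed to talk to the mounted socket):
# Option 1: reuse the host daemon by bind-mounting its socket
docker run -d --name jenkins \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts
# Option 2: a separate Docker daemon inside a container (true DinD, needs --privileged)
docker run -d --name dind --privileged docker:dind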
The best solution for the "Jenkins in a Docker container needs Docker" case is to add your host as a node (agent) in Jenkins. This makes every build step (literally everything) run on your host machine. It took me a month to find the perfect setup.
Mount the Docker socket in the Jenkins container: you lose the build context. The files you want to COPY into the image live in the workspace inside the Jenkins container, while Docker is running on the host. COPY fails for sure.
Install the Docker client in the Jenkins container: you have to alter the official Jenkins image, which adds complexity. And you lose the context too.
Add your host as a Jenkins node: perfect. You have the context, and there is no need to alter the official image.
Without completely understanding why you need Docker in Docker (I suspect you have some special requirements for the environment in which the actual image is built), may I suggest multi-stage builds of Docker images? You might find them useful, as they let you first build the build environment and then build the actual image (hence the name 'multi-stage build'). Check it out here: https://docs.docker.com/develop/develop-images/multistage-build/
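A minimal sketch of such a multi-stage Dockerfile (the base images and the gcc build step are placeholder assumptions):
# Stage 1: the build environment with the full toolchain
FROM gcc:12 AS builder
WORKDIR /src
COPY . .
RUN gcc -o app main.c
# Stage 2: the actual, much smaller image containing only the build artifact
FROM debian:bookworm-slim
COPY --from=builder /src/app /usr/local/bin/app
CMD ["app"]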
I have to install Docker on Windows 7 in a private network with no internet access.
I can download anything and bring it in by USB from another computer.
How do I install and use Docker?
Meaning: from installation (what to install and how to set it up) to creating the first image.
Most of the instructions I found use a proxy, and I can't use a proxy.
The installation itself involves copying docker-machine-Windows-x86_64.exe, renaming it to docker-machine.exe, and creating a VirtualBox machine with it.
The issue is that it will attempt to download boot2docker.iso (the TinyCore-based Linux image which includes Docker pre-installed).
That means you need to copy that file onto your USB key first, from boot2docker/boot2docker/releases.
From issue 539:
docker-machine create mydocker --virtualbox-boot2docker-url=file:///Users/auser/Downloads/boot2docker.iso --driver=virtualbox
You will need a similar docker setup on a machine with internet access in order to:
docker pull the images you want
docker save them
copy them on the USB key
copy them onto your offline server, in C:\Users... (which is the only folder mounted in the boot2docker VM)
Then you need to open an SSH session:
docker-machine ssh default
And within that session, you can access the folder where the saved images are copied, and docker load them.
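Put together, the transfer might look roughly like this (the image name, file names and the Windows user name are examples; inside the boot2docker VM, C:\Users is mounted as /c/Users):
# On the machine with internet access
docker pull ubuntu:20.04
docker save -o ubuntu-20.04.tar ubuntu:20.04
# copy ubuntu-20.04.tar to the USB key, then into C:\Users\<youruser>\ on the offline machine
# On the offline machine, from inside the boot2docker VM
docker-machine ssh default
docker load -i /c/Users/<youruser>/ubuntu-20.04.tar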