I followed this tutorial to enable GitLab as a Docker image registry, then I ran docker push for my image and it was uploaded to GitLab correctly.
http://clusterfrak.com/sysops/app_installs/gitlab_container_registry/
If I go to the project's Registry page in GitLab, the image appears there. The problem occurs when I restart the Docker engine or the container where GitLab runs: when I go back to the project's Registry page in GitLab, all the images are gone.
What could be happening?
the problem occurs when I restart the Docker engine or the container where GitLab runs: when I go back to the project's Registry page in GitLab, all the images are gone
That means the path where GitLab stores Docker images is part of the container filesystem and is not persistent, i.e. it is not a volume or a bind mount.
From the tutorial, you have the configuration:
################
# Registry #
################
gitlab_rails['registry_enabled'] = true
gitlab_rails['gitlab_default_projects_features_container_registry'] = false
gitlab_rails['registry_path'] = "/mnt/docker_registry"
gitlab_rails['registry_api_url'] = "https://localhost:5000"
You need to make sure, when starting the GitLab container, that it mounts a volume or a local host path (which is persistent) onto the container-internal path /mnt/docker_registry.
Then, restarting GitLab would let you find all the images you have stored in the GitLab-managed Docker registry.
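For example, assuming the container is started with docker run (the host path /srv/gitlab/registry is just an illustrative name), the bind mount would look like this:

docker run -d --name gitlab \
  -v /srv/gitlab/registry:/mnt/docker_registry \
  gitlab/gitlab-ce:latest

Anything the registry writes under /mnt/docker_registry then lands in /srv/gitlab/registry on the host, so it survives restarting or recreating the container.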
I created the volume where the images are stored and now it works.
Thank you very much!
Related
I have a Docker build environment where I build containers locally and test them. When I'm done, I push them to our Dev GitLab container registry to be deployed to Kubernetes.
I've run into a situation where either Docker isn't pushing up the newest layers or GitLab is seeing layers from a previous version and just mounting those, so when the container is deployed in Kubernetes it runs the old image despite the new tag.
I've tried completely wiping my Docker image repository, rebuilding, and repushing and that didn't fix it. I tried using the red trash icon in GitLab to delete the old version of the tag I'm trying to use.
I added some echo statements to the container's console output, so I know the new bits aren't being run, but I can't figure out whether the problem is Docker or GitLab, or how to fix it. Anyone have any ideas?
TIA!
Disregard -- my worker nodes had a cached Docker image on them, and because imagePullPolicy was not set in my YAML, Kubernetes defaulted to using the cached image. I just had to clear the image on the worker node (docker rmi -f <image name>) and then update my deployment YAML to use imagePullPolicy: Always.
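For reference, this is roughly where the setting goes in a Deployment spec (the names and image here are placeholders, not taken from the thread):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/group/my-app:dev
        imagePullPolicy: Always   # always pull, never reuse a node's cached image

With imagePullPolicy unset, Kubernetes defaults to IfNotPresent for any tag other than :latest, which is exactly the cached-image behavior described above.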
The docker-registry image is started on a local machine. Images that are built can be pushed to the registry. However, when we restart the registry, the pushed images are lost and not retained.
Started the container with the --restart always flag.
Expected result would be to retain the images in the registry even after a restart of the docker-registry.
Pick a directory on the host filesystem to store the images, and bind-mount it into the registry container so the data survives registry downtime and restarts.
So in your docker run for the registry, you need this:
-v /registry-storage:/var/lib/registry
You can name the left-hand directory anything you like, but the right-hand side of the colon must be /var/lib/registry, because that is the path where the registry image stores its data inside the container.
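Putting it together, a complete docker run for the registry might look like this (the container name and host path are just examples):

docker run -d \
  --name registry \
  --restart always \
  -p 5000:5000 \
  -v /registry-storage:/var/lib/registry \
  registry:2

After a docker restart registry (or a full host reboot), the pushed images are still present, because their layers live in /registry-storage on the host rather than in the container's writable layer.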
I want to create a Nexus 3 Docker image with a pre-defined configuration (a few repos and dummy artifacts) for testing my library.
I can't call the Nexus API from the Dockerfile, because it requires a running Nexus.
I tried bringing up the Nexus 3 container, configuring it manually, and creating an image from the container with
docker commit ...
The new image was created, but when I start a new container from it, it doesn't contain any of the manual configuration I did before.
How can I customize the Nexus 3 image?
If I understand correctly, you are trying to create a portable, standalone, customized Nexus 3 installation as a self-contained Docker image for testing/distribution purposes.
Doing this by extending the official nexus3 docker image will not work. Have a look at their Dockerfile: it defines a volume for /nexus_data and there is currently no way of removing this from a child image.
It means that when you start a container without any specific options, a new anonymous volume is created for each new container. This is why your committed image starts with blank data. The best you can do is to name the data volume when you start the container (option -v nexus_data:/nexus_data for docker run) so that the same volume is reused. But the data will still live in your local Docker installation, not in the image.
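As a concrete sketch of that (the container name is arbitrary; sonatype/nexus3 is the official image, and /nexus_data is the volume path used in this thread):

docker run -d --name nexus \
  -p 8081:8081 \
  -v nexus_data:/nexus_data \
  sonatype/nexus3

Every container started with -v nexus_data:/nexus_data sees the same configuration and artifacts, but only on that one Docker host.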
To do what you want, you need to build your own Docker image without a data volume. You can do it from the official Dockerfile above: just remove the VOLUME line. Then you can customize your container and commit it to an image which will contain the data.
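The workflow would then look roughly like this (image and container names are illustrative; the docker build assumes you are in a checkout of the official Dockerfile with the VOLUME line deleted):

# build an image whose /nexus_data is part of the layered filesystem
docker build -t nexus3-novolume .
docker run -d --name nexus-seed -p 8081:8081 nexus3-novolume
# ... configure repos and upload dummy artifacts via the UI or API ...
docker commit nexus-seed my-nexus3-preconfigured

Because /nexus_data is no longer a volume, docker commit now captures the configuration and artifacts in the new image.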
I am a newbie with Docker, but I have read many guides. I am configuring a container that runs a base image of Jenkins with the Blue Ocean plugin. I ran it using the docker run command, then configured my proxy information and added another plugin, the Kubernetes plugin, through Jenkins' Manage Plugins UI. Then I stopped the container and committed it with docker commit, to save this state with the k8s plugin and the proxy information already set. But when I run a new container from the image I made with docker commit, I can't see any proxy information or the k8s plugin. It is the same as the image I started from. Is there something I'm missing?
JENKINS_HOME is set to be a volume in the default Jenkins Docker image (which I'm assuming you're using). Volumes live outside of the Docker container's layered filesystem, so any changes in those folders will not be persisted in subsequent image commits.
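If the goal is just to keep the plugins and proxy settings across restarts, a named volume is usually simpler than committing images. A minimal sketch, assuming the official jenkins/jenkins:lts image (the volume and container names are arbitrary):

docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts

JENKINS_HOME (/var/jenkins_home) then lives in the jenkins_home volume, so stopping and recreating the container keeps the configuration, even though docker commit never captures it.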
So I am using gitlab-ci to deploy my websites in Docker containers. Because the gitlab-ci Docker runner doesn't seem to do what I want, I am using the shell executor and letting it run docker-compose up -d. Here comes the problem.
I have two volumes in my container: ./:/var/www/html/ (the content of my git repo, i.e. the files I want to replace on each build) and a mount nested inside that one, /srv/data:/var/www/html/software/permdata (a persistent mount on my server).
When the gitlab-ci runner starts, it tries to remove all files while the container is running, but because of this mount-in-mount it gets a "device busy" error and aborts. So I have to stop and remove the container manually before I can run my build (which rather defeats the point of build automation).
Options I thought about to fix this problem:
stop and remove the container before gitlab-ci-multi-runner starts (seems not possible)
add the git data to my Docker image and only mount my permdata (but it seems you can't add data to a container with docker-compose without the volume option, the way you can in a Dockerfile)
Option 2 would be ideal because then it would also sort out my issues with permissions on the files.
Maybe someone has run into the same problem and can give me some advice.
it seems you can't add data to a container with docker-compose without the volume option, the way you can in a Dockerfile
That's correct. The Compose file is not meant to replace the Dockerfile; it's meant to run multiple images for an application or project.
You can modify the Dockerfile to copy in the git files.
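A minimal sketch of that approach (the paths come from the question; the base image and service name are illustrative): bake the repo contents into the image with a Dockerfile,

FROM php:7-apache
# copy the git checkout into the image instead of bind-mounting it
COPY . /var/www/html/

and then mount only the persistent data in docker-compose.yml:

services:
  web:
    build: .
    volumes:
      - /srv/data:/var/www/html/software/permdata

Since the repo files now live in the image layers, the runner no longer has to delete files underneath a live nested mount, and the "device busy" problem goes away.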