I have a docker-compose deployment that needs to pull its image from an insecure registry, so I have to update the file /etc/docker/daemon.json and add the following entry to make it work:
{
  "insecure-registries": ["hostname"]
}
Now I want to move this deployment from standalone Docker to Kubernetes. How can I update my Kubernetes namespace to deploy the image from an insecure registry?
If your nodes use Docker as the container runtime, you will need to go to each worker node and edit the Docker daemon configuration, /etc/docker/daemon.json:
{
  "insecure-registries": ["hostname"]
}
You can use a Kubernetes DaemonSet to apply this change and reload the Docker service on each node in an automated way; when the cluster scales up, the DaemonSet will apply the change on the new nodes as well.
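A minimal sketch of such a DaemonSet, assuming the nodes use Docker as the runtime and allow privileged pods; the registry address registry.example.com:5000 is a placeholder. It relies on dockerd re-reading insecure-registries when it receives SIGHUP, so no full restart is needed:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: insecure-registry-config
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: insecure-registry-config
  template:
    metadata:
      labels:
        app: insecure-registry-config
    spec:
      hostPID: true                      # lets the pod signal the host's dockerd
      containers:
      - name: configure
        image: alpine:3.18
        securityContext:
          privileged: true
        command:
        - /bin/sh
        - -c
        - |
          # Write the daemon config on the host (placeholder registry address).
          echo '{ "insecure-registries": ["registry.example.com:5000"] }' > /host/etc/docker/daemon.json
          # Signal dockerd to reload its configuration; insecure-registries
          # is one of the settings dockerd re-reads on SIGHUP.
          pkill -HUP dockerd
          # Keep the pod running so the DaemonSet stays healthy.
          sleep infinity
        volumeMounts:
        - name: etc-docker
          mountPath: /host/etc/docker
      volumes:
      - name: etc-docker
        hostPath:
          path: /etc/docker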
I want to change the default Docker registry configuration in Nomad. I am setting up a Nomad cluster in an enterprise VM, which connects to a JFrog Artifactory Docker registry. Any Docker Hub image reference has to go through the internal Artifactory registry.
But when I set up Nomad and try to install Waypoint inside Nomad, it looks for the busybox, Waypoint server, and runner images on Docker Hub.
How can I change the configuration for Nomad to go via Artifactory to reach Docker Hub?
It's not possible to set a "default" registry for the Nomad client's Docker driver. The registry would need to be set in the "image" configuration of the Nomad jobspec's "config" stanza. Within that config stanza, or on the Nomad client, you would need to provide an "auth" stanza as well so that Nomad can pull the image from your private registry.
https://www.nomadproject.io/docs/drivers/docker
Regarding Waypoint specifically, for your requirements I'd recommend not installing Waypoint with the waypoint install command, because there is no option to change the Docker repository from which the busybox image is pulled. Instead, I'd recommend creating a custom Nomad jobspec to deploy Waypoint and, if you intend to use busybox as part of that jobspec, specifying your Artifactory image repository there.
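For illustration, a hedged sketch of what the relevant part of such a jobspec might look like; the Artifactory hostname and repository path are placeholders, not values from the question:

task "waypoint-server" {
  driver = "docker"
  config {
    # Pull through the internal Artifactory remote repository instead
    # of Docker Hub; hostname and path below are placeholders.
    image = "artifactory.example.com/docker-remote/hashicorp/waypoint:latest"
  }
}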
I asked the same question on the Nomad forums and got an answer. I am posting it here, with a link to the suggested answer:
https://discuss.hashicorp.com/t/nomad-network-bridge/37421/2
You can configure Nomad to use an alternate image by configuring the infra_image under the Docker plugin options in Nomad’s agent configuration.
plugin "docker" {
config {
infra_image: "<local mirror>/google_containers/pause-amd64:3.1"
}
}
task "example" {
driver = "docker"
config {
image = "secret/service"
auth {
username = "dockerhub_user"
password = "dockerhub_password"
}
}
}
I have a Docker build environment where I build containers locally and test them. When I'm done, I push them to our Dev GitLab container registry to be deployed to Kubernetes.
I've run into a situation where either Docker isn't pushing the newest layers or GitLab is seeing layers from a previous version and just mounting that layer, so when the container is deployed in Kubernetes it runs the old container image despite the new tag.
I've tried completely wiping my Docker image repository, rebuilding, and repushing and that didn't fix it. I tried using the red trash icon in GitLab to delete the old version of the tag I'm trying to use.
I added some echo statements to the container's console output, so I know the new bits aren't being run, but I can't figure out whether the problem is Docker or GitLab, or how to fix it. Anyone have any ideas?
TIA!
Disregard -- my worker nodes had a cached Docker image on them, and because imagePullPolicy was not set in my YAML, it defaulted to using the image cached in Docker. I just had to clear the image on my worker node (docker rmi -f <image name>) and then update my deployment YAML to use imagePullPolicy: Always.
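For reference, a minimal sketch of the relevant part of such a Deployment manifest; the names and registry path are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.gitlab.example.com/group/myapp:dev
        imagePullPolicy: Always   # always pull; never reuse a node-cached image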
How do I configure Docker on my QNAP TS-131P so that it only uploads one layer at a time?
I have a problem pushing an image because Docker tries to push multiple layers concurrently, and the pushes keep failing because of a poor internet connection.
According to How to push single docker image layers at time?, I need to configure the daemon to use max-concurrent-uploads, but I don't understand how to do this in the context of QNAP.
[~] # docker -v
Docker version 17.09.1-ce, build a9fd393
[~] # which docker
/share/CACHEDEV1_DATA/.qpkg/container-station/bin/docker
After much digging, it looks like Container Station uses the same location as standard Linux systems for the dockerd config file. It should work after adding the file:
/etc/docker/daemon.json with:
{
"max-concurrent-uploads": 1
}
from how-to-push-single-docker-image-layers-at-time
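If restarting Docker outright is disruptive on the NAS, dockerd also re-reads some settings, including max-concurrent-uploads, when it receives SIGHUP; a sketch, assuming pidof is available in the QNAP shell:

# Ask the running dockerd to reload its configuration file.
kill -HUP $(pidof dockerd)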
Alternatively, if the script Container Station uses to start Docker (/share/CACHEDEV1_DATA/.qpkg/container-station/script/run-docker.sh) has a line invoking dockerd, you could add the command-line argument --max-concurrent-uploads=1 to that line.
I followed this tutorial to enable GitLab as a Docker image registry, then I executed docker push for the image and it was uploaded to GitLab correctly.
http://clusterfrak.com/sysops/app_installs/gitlab_container_registry/
If I go to the project's Registry option in GitLab, the image appears there, but the problem occurs when I restart the Docker engine or the container where GitLab is running: when I re-enter the project's Registry option in GitLab, all the images are gone.
Here is what could be happening. You write:
the problem occurs when I restart the Docker engine or the container where GitLab is running [...] all the images are gone
That means the path where GitLab stores Docker images is part of the container filesystem and is not persistent, i.e. it is not a volume or a bind mount.
From the tutorial, you have the configuration:
################
# Registry #
################
gitlab_rails['registry_enabled'] = true
gitlab_rails['gitlab_default_projects_features_container_registry'] = false
gitlab_rails['registry_path'] = "/mnt/docker_registry"
gitlab_rails['registry_api_url'] = "https://localhost:5000"
You need to make sure, when starting the GitLab container, that it mounts a volume or a persistent host path to the container-internal path /mnt/docker_registry.
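A sketch of such a launch command; the image name gitlab/gitlab-ce and the host path /srv/gitlab/registry are placeholders to adapt to your setup:

# Bind-mount a persistent host directory over the registry path.
docker run --detach \
  --name gitlab \
  --publish 443:443 --publish 80:80 --publish 5000:5000 \
  --volume /srv/gitlab/registry:/mnt/docker_registry \
  gitlab/gitlab-ce:latest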
Then, restarting GitLab would allow you to find all the images you have stored in the GitLab-managed Docker registry.
I have created the volume where the images are stored and now it works.
Thank you very much!
I am a newbie with Docker, but I have read many guides. I am configuring a container that runs a Jenkins base image with the Blue Ocean plugin. I ran it with the docker run command, configured my proxy information, and added another plugin, the Kubernetes plugin, through the Jenkins Manage Plugins UI. Then I stopped the container and committed it with docker commit to save this state, including the k8s plugin and the proxy information I had set. But when I run the new image made with docker commit, I can't see any proxy information or the k8s plugin; it is the same as the image I started with. Is there something I missed?
JENKINS_HOME is set to be a volume in the default Jenkins Docker image (which I'm assuming you're using). Volumes live outside of the Docker container's layered filesystem, which means that any changes in those folders will not be persisted in subsequent image commits.
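A sketch of the pattern that does persist such changes: keep JENKINS_HOME in a named volume so plugin and proxy settings survive container restarts. Here jenkins_home is an arbitrary volume name, and jenkins/jenkins:lts stands in for whatever image you actually use:

# Create a named volume and mount it over JENKINS_HOME.
docker volume create jenkins_home
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts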