Enable Docker registry mirror (Nexus) on EKS nodes - docker

I'm trying to configure EKS node pools to spin up nodes that will be configured to use a Docker registry mirror (a Sonatype Nexus cache repo, in my case).
I know I have two options: 1) use a custom AMI (not in favor of that option, of course), and 2) configure user data for the launch template to change the dockerd configuration and restart it.
I tried that by doing the following:
#!/bin/bash
cat <<EOF >/etc/docker/daemon.json
{"registry-mirrors": "http://nexus.xxxxxxx.com:9091"}
EOF
systemctl restart docker --no-block
That didn't work.
If anyone has good advice or a nice example of user data for EKS, I'd appreciate it.
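For what it's worth, a minimal corrected sketch follows, assuming the node AMI runs dockerd under systemd; the Nexus URL is the placeholder from above. Two things stand out in the attempt: "registry-mirrors" must be a JSON array, not a bare string, and the closing quote was a typographic quote, which makes the JSON invalid. Also note that when user data goes into a launch template for an EKS managed node group, it generally has to be wrapped in MIME multi-part format.
#!/bin/bash
# Sketch only: the mirror URL is a placeholder; dockerd is assumed to run under systemd.
mkdir -p /etc/docker
cat <<'EOF' >/etc/docker/daemon.json
{
  "registry-mirrors": ["http://nexus.xxxxxxx.com:9091"]
}
EOF
systemctl restart docker --no-block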

Related

Installing and running Docker in a Docker container running in OpenShift

I am currently working on the following scenario:
I am trying to set up a container in OpenShift that runs a Jenkins that is itself able to run Docker, to make use of declarative pipelines where the build runs in its own Docker container. This basically makes it necessary to install and run Docker inside this container.
I have been working on it for quite some time now and have checked dozens of posts and threads online, but I have not been able to accomplish it. Basically, this is how far I got:
I can install Docker in my container (from the base image openshift/jenkins-2-centos7:latest)
I can't get Docker to run, as starting it makes use of systemctl.
I have read that systemctl does not work inside Docker containers, or is at least strongly discouraged, as it interferes with PID 1 in the system. Without
systemctl start docker
Docker is left unable to connect to the daemon (as expected), with the error message
Can't connect to docker daemon. Is 'docker -d' running on this host?
So I tried to set up the daemon myself, using the following in my Dockerfile:
RUN usermod -aG docker $(whoami)
RUN dockerd -H unix:///var/run/docker.sock
which will also not work, telling me that cgroups cannot be mounted. After some more research, I found that this could be handled with the cgroupfs-mount script from
https://github.com/tianon/cgroupfs-mount/tree/master
But here too I had no luck, ending up with the following error:
Error starting daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables v1.4.21: can't initialize iptables table `nat': Permission denied (you must be root)
Perhaps iptables or your kernel needs to be upgraded.
Now, after hours, I am out of ideas. Does anyone have an idea how to make Docker work inside OpenShift? I would be really grateful.
I am trying to set up a container in OpenShift that runs a Jenkins that is itself able to run Docker, to make use of declarative pipelines where the build runs in its own Docker container. This basically makes it necessary to install and run Docker inside this container.
I don't think the conclusion you've drawn here is the only possibility, and what I'll describe below is an easier approach to get what (I think) you want! :) If you have any use cases other than the three I'll describe, let me know and I'll try to update to cover them:
Pipelines running in their own containers
Running additional containers from Pipelines
Building container images from Pipelines
Pipelines running in their own containers
For this case, there's the excellent Kubernetes plugin.
With this plugin, you add a Kubernetes/OpenShift cloud to the Jenkins global config. This can either be the one in which Jenkins is running (if you use the Jenkins image provided by OpenShift, this gets added by default at least), or an external cluster.
Inside that configuration you can define PodTemplates (again, a couple of examples are provided in the Jenkins image provided by OpenShift), or, I think, you can also specify them directly in your pipeline. When your pipeline requests a node/agent with a label that matches one of these (and there are no long-running agents that match), a pod will be created from that template, and your pipeline execution will happen inside a container in it. Once it's no longer needed, it will be deprovisioned again.
Here are the pipeline steps exposed by this plugin: https://jenkins.io/doc/pipeline/steps/kubernetes/
Running additional containers from Pipelines
As part of your pipeline, you may want to run some tests, and those may expect to be able to interact with e.g. a database. You can create resources for that in your OpenShift project (e.g. a Deployment, exposed with a Service), and tear them down afterwards. The openshift-client plugin is very useful here and has docs on how to interact with OpenShift.
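As a sketch of that provision-test-teardown flow with the plain oc CLI (the openshift-client plugin wraps similar calls; the database name and credentials below are made up):
# Spin up a throwaway PostgreSQL for the tests (oc new-app labels everything app=testdb)
oc new-app --name=testdb \
  -e POSTGRESQL_USER=test -e POSTGRESQL_PASSWORD=test -e POSTGRESQL_DATABASE=test \
  postgresql
# ... run the tests against the testdb service ...
# Tear everything down again by label
oc delete all -l app=testdb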
Building container images from Pipelines
If your goal is to build container images from pipelines, remember that OpenShift also exposes this capability (depending on the security configuration) through Builds. Just like in the previous section, you can use the openshift-client plugin to create and trigger builds.
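As an illustration, a binary build driven from a pipeline workspace could look roughly like this with the oc CLI (the build name is a placeholder; the openshift-client plugin exposes equivalent calls):
# One-time: create a binary BuildConfig that accepts sources from the workspace
oc new-build --binary --name=myapp
# Per build: upload the checked-out sources and follow the build log
oc start-build myapp --from-dir=. --follow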
For more information on the Jenkins image that's maintained by OpenShift (and generally how to do useful things in Jenkins on OpenShift), there's this dedicated page in the OpenShift docs.
There is this article by @jpetazzo, from the Docker team, about Docker-in-Docker (DinD):
article:
The primary purpose of Docker-in-Docker was to help with the development of Docker itself. Many people use it to run CI (e.g. with Jenkins), which seems fine at first, but they run into many “interesting” problems that can be avoided by bind-mounting the Docker socket into your Jenkins container instead.
DinD Repo:
This work is now obsolete, thanks to the combined efforts of some amazing people like @jfrazelle and @tianon, who also are black belts in the art of putting IKEA furniture together.
If you want to run Docker-in-Docker today, all you need to do is:
docker run --privileged -d docker:dind
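To actually use that daemon, you point a Docker client at it. A minimal sketch (container and network names are arbitrary; newer docker:dind images enable TLS by default, which this disables for brevity, so don't copy it into production):
docker network create dind
docker run --privileged -d --name docker --network dind \
  -e DOCKER_TLS_CERTDIR="" docker:dind
# The client resolves the daemon by its container name on the shared network
docker run --rm --network dind -e DOCKER_HOST=tcp://docker:2375 \
  docker:cli docker ps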
And here is an article using another approach: building Docker containers with Jenkins inside a Docker container, by bind-mounting the host's Docker socket:
docker run -p 8080:8080 \
-v /var/run/docker.sock:/var/run/docker.sock \
--name jenkins \
jenkins/jenkins:lts
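One caveat with this approach: the jenkins/jenkins:lts image does not ship a Docker CLI, so you still need to install one in the container (or mount the binary from the host). Assuming that is done, a quick way to verify the socket mount:
# Runs the Docker CLI inside the Jenkins container against the host's daemon
docker exec jenkins docker version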
So you may want to adapt one of these solutions to your OpenShift scenario. I hope it solves your issue.
You'd need a privileged pod running Jenkins which mounts the OpenShift node's Docker socket. This is a bad idea, as Jenkins would launch containers outside Kubernetes' semantics and control.
Why not use the S2I (Source-to-Image) service shipped with OpenShift?
Regards.

Spinning Docker / ECS containers from Jenkins Docker container

I had set up Jenkins using the Jenkins Docker image on an AWS ECS cluster with just one EC2 instance.
After the initial setup, I tried running the hello-world pipeline from Jenkins documentation. I see that I am getting "docker: not found"
I understand that this is because Docker is not installed and available within the Jenkins Docker container. However, I have a fundamental question on whether I should proceed with installing Docker inside the running Jenkins Docker container (to use that as the base image) or not. When I researched around, I found this blog post and this SO Answer.
I wanted to follow these suggestions, so I tried mounting the binary /usr/bin/docker and the socket /var/run/docker.sock from the host EC2 / ECS instance into the Jenkins container. After this, when I ran the docker version command to test the setup, I got Linux library issues - docker: error while loading shared libraries: libltdl.so.7: cannot open shared object file: No such file or directory - which indicates that the setup did not go well.
Here are my questions -
How do I run Jenkins pipelines that use Docker containers when Jenkins itself is running in a Docker container? I want to be able to pull / build / run Docker containers, for example to run the hello-world pipeline example referenced above.
My end goal is to create 2 types of Jenkins jobs that do the following -
Jenkins Job Type 1
Check out repository from BitBucket cloud
Run a shell script to build a docker image for a java project (possibly using the maven jib plugin)
Publish to AWS ECR. (assuming this can be done using the cloudbees plugin)
Jenkins Job Type 2
Pull the image published from Job Type 1 from AWS ECR
Create a container from the image (which essentially runs the java application)
The container itself could be run on the same Jenkins ECS cluster with slaves. But, again, should the slaves have Docker installed within them to pull and run the image from ECR?
Asking these questions after a good amount of research and not finding answers. Any guidance is appreciated. Thanks.
I Googled the docker error you included in your post and found this StackOverflow post.
You have to install libltdl-dev in order to get everything working correctly
Since the errors are identical I suggest you give it a shot. As per the post, install libltdl-dev in the docker container.
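On a Debian-based Jenkins image that would be something like the following (libltdl7 is the Debian package that provides libltdl.so.7 at runtime; libltdl-dev, as suggested in the post, pulls it in as well):
# Inside the Jenkins container or its Dockerfile
apt-get update && apt-get install -y libltdl7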

How to run container in a remote docker host with Jenkins

I have two servers:
Server A: Build server with Jenkins and Docker installed.
Server B: Production server with Docker installed.
I want to build a Docker image in Server A, and then run the corresponding container in Server B. The question is then:
What's the recommended way of running a container in Server B from Server A, once Jenkins is done with the docker build? Do I have to push the image to Docker hub to pull it in Server B, or can I somehow transfer the image directly?
I'm really not looking for specific Jenkins plugins or stuff, but rather, from a security and architecture standpoint, what's the best approach to accomplish this?
I've read a ton of posts and SO answers about this and have come to realize that there are plenty of ways to do it, but I'm still unsure what's the ultimate, most common way to do this. I've seen these alternatives:
Using docker-machine
Using Docker Restful Remote API
Using plain ssh root@server.b "docker run ..."
Using Docker Swarm (I'm a super noob, so I'm still unsure if this is even an option for my use case)
Edit:
I run Servers A and B in Digital Ocean.
A Docker image can be saved to a regular tar archive:
docker image save -o <FILE> <IMAGE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_save/
Then scp this tar archive to another host, and run docker load to load the image:
docker image load -i <FILE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_load/
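Putting the steps together, a sketch of the whole transfer (image name, paths, and host are placeholders):
# On Server A
docker image save -o myapp.tar myapp:latest
scp myapp.tar user@server-b:/tmp/myapp.tar
# Load and run on Server B, driven over SSH from Server A
ssh user@server-b "docker image load -i /tmp/myapp.tar && docker run -d myapp:latest"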
This save-scp-load method is rarely used, though. The common approach is to set up a private Docker registry behind your firewall, and push images to or pull them from that private registry. This doc describes how to deploy a container registry. Or you can choose a registry service provided by a third party, such as GitLab's container registry.
When using Docker registries, you only push/pull the layers that have changed.
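The registry workflow then looks roughly like this (registry host and image name are placeholders; a plain-HTTP registry additionally has to be listed under insecure-registries in daemon.json):
# On Server A: tag the image for the private registry and push it
docker tag myapp:latest registry.internal:5000/myapp:latest
docker push registry.internal:5000/myapp:latest
# On Server B: pull it (only changed layers are transferred)
docker pull registry.internal:5000/myapp:latest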
You can use the Docker REST API. The Jenkins HTTP Request plugin can be used to make the HTTP requests. You can also run Docker commands directly on a remote Docker host by setting the DOCKER_HOST environment variable. To export the environment variable to the current shell:
export DOCKER_HOST="tcp://your-remote-server.org:2375"
Please be aware of the security concerns when allowing TCP traffic. More info.
Another method is to use SSH Agent Plugin in Jenkins.
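To illustrate both options (host names are placeholders; the ssh:// form needs a reasonably recent Docker client, roughly 18.09 or later):
# With DOCKER_HOST exported as above, commands target the remote daemon:
docker run -d --name myapp myapp:latest
# Alternatively, tunnel over SSH instead of exposing plain TCP:
docker -H ssh://user@your-remote-server.org ps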

Easiest way to do docker build command within Jenkinsfile running on Jenkins slave node?

Basic example of what I want my Jenkinsfile to do:
node {
    sh 'docker build -t foo/bar .'
}
It seems like I need to install docker onto the Jenkins slave image that's executing my Jenkinsfile. Is there an easy way of doing this? (That Jenkins slave image is itself a docker container)
Are my assumptions correct?
When running with Jenkins master/slaves, the Jenkinsfile is executed by a Jenkins slave
Jenkins plugins installed via the Manage Plugins section (e.g. the Docker plugin, or the Gcloud SDK plugin) are only installed on the Jenkins master; therefore, I would need to manually build my Jenkins slave Docker image and install Docker on it?
Since I also need access to the 'gcloud' command (I'm running Jenkins via Kubernetes Helm/Charts), I've been using the gcr.io/cloud-solutions-images/jenkins-k8s-slave image for my Jenkins slave.
Currently it errors out saying "docker: not found"
My assumption is that you want to run docker build inside the Jenkins slave (which is a Kubernetes pod, I assume created by the Kubernetes Jenkins plugin).
To set the stage: when Kubernetes creates a pod that will act as a Jenkins slave, all commands that you execute inside the node will be executed inside that Kubernetes pod, inside one of the containers there (by default there will be only one container, but more on this later).
So you are actually trying to run a Docker command inside a container based on gcr.io/cloud-solutions-images/jenkins-k8s-slave, which is most likely based on the official Jenkins JNLP slave image, which does not contain Docker!
From this point forward, there are two approaches that you can take:
use a slightly modified image based on the JNLP slave that also contains the Docker client, and mount the Docker socket (/var/run/docker.sock) inside the container.
(You can find details on this approach here).
Here is an image that contains the Docker client and kubectl.
Here is a complete view of how to configure the Jenkins Plugin:
Note that you use a different image (you can create your own and add any binary you want there) and that you mount the Docker socket inside the container.
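The effect of the socket mount is easy to demonstrate even outside Kubernetes: any container that has a Docker client and the host's socket talks directly to the host's daemon. A sketch using the official client image:
# The docker ps output is the HOST's container list, not the container's own
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli docker ps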
The problem with the first approach is that you create a new image forked from the official JNLP slave and manually add the Docker client. This means that whenever Jenkins or Docker has updates, you need to manually update your image and entire configuration, which is not that desirable.
Using the second approach you always use official images, and you use the JNLP slave to start other containers in the same pod.
Here is the full file from the image below
Here is the Jenkins Plugin documentation for doing this
As I said, the JNLP image will start a container that you specify in the same pod. Note that in order to use Docker from a container you still need to mount the Docker socket.
These are the two ways I found to achieve building images inside a Jenkins JNLP slave running inside a container.
The example also shows how to push the image using credential bindings from Jenkins, and how to update a Kubernetes deployment as part of the build process.
Some more resources:
deploy Jenkins to Kubernetes as Helm chart, configure plugins to install
Thanks,
Radu M

MESOS / MARATHON / DOCKER - Docker started is wrong & Port Forwarding

I'm a bit new to Mesos / Marathon and I am trying to integrate them with my Docker images.
So far: Mesos 0.21 for slave & master, Marathon 0.7.5 and, of course, ZooKeeper.
I succeeded in adding my Docker images with curl, but unfortunately I have two main issues:
Even though I have built my image locally (in this case a tomcat7 Docker image) and can see in the Marathon config that it is taken into account, the Docker image started is not the one expected; it is always an ubuntu:latest image.
How do I manage Docker port forwarding? Are we forced to use a solution such as HAProxy? I see that my Mesos slave always uses the same port range (31000-32000) for started containers.
Thank you everyone for your support.
Here is an answer from ConnorDoyle, found on IRC in #mesos:
ConnorDoyle: Mesos comes with a Docker containerizer that always pulls from a docker registry.
You can configure the registry that dockerd pulls from in the usual way (via a .dockercfg file)
Alex: So even if, for instance, everything is local?
ConnorDoyle: Yeah. You can use any image up on Dockerhub (the default registry for dockerd) or you can set your own.
AlexFR: Should I define a private registry?
AlexFR: or push it to Dockerhub
ConnorDoyle: Yes, because it assumes you're on a big cluster and you want to fetch the image from somewhere when the job starts :)
ConnorDoyle: Yeah, pushing to dockerhub is easier probably.
This answers the first question.
Concerning the second, it seems that HAProxy is the "standard approach": https://mesosphere.github.io/marathon/docs/service-discovery-load-balancing.html
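For completeness: the 31000-32000 range you see is the port range the Mesos slave offers as resources; a bridged Marathon app requests host ports from it, and HAProxy (or similar) then maps a stable service port onto whatever was assigned. A sketch of such an app definition posted with curl (Marathon host, image, and ports are placeholders; field names follow the Marathon 0.7.x docs and may differ in newer versions):
curl -X POST http://marathon.example.com:8080/v2/apps \
  -H "Content-Type: application/json" -d '{
  "id": "tomcat7",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "myregistry.example.com:5000/tomcat7",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 8080, "hostPort": 0 }]
    }
  }
}'
# hostPort 0 asks Mesos to pick a free port from the offered 31000-32000 range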
