A jenkins-slave container image per module type or a single centralized one which can build anything? - docker

I have multiple projects I need to build as part of the same CI flow - some are in java, some are nodejs, some are c++ etc.
We use Jenkins and slaves are supposed to run as docker containers.
My question is - should I create a Jenkins slave container image per module type, i.e. a dedicated slave image which can build Java, a dedicated container with Node installed to build nodejs, etc., or a single container which can build anything - Java, Node, etc.?
If I look at it from a VM perspective, I would most likely use the same VM to build anything - which means a centralized build slave. But I don't like this coupling, and if tomorrow I need to update the Java version while keeping the old one around, I might end up with huge images with little difference between them.
WDYT?

I personally would go down the route of a container-per-module-type because of the following:
- I like to keep my containers as focussed as possible. They should do one thing and do it well, e.g. build Java applications or build Node applications
- Docker makes it incredibly easy to build container images
- It is incredibly easy to stop and start containers
I'd probably create myself a separate project in Git that was structured something like this:
- /slaves
- /slaves/java
- /slaves/java/Dockerfile
- /slaves/node
- /slaves/node/Dockerfile
...
There is one Dockerfile per "module type" that creates and builds the container image of that slave. I would make changes to this project via pull requests, and each time a pull request is merged into master, push the resulting images up to DockerHub as the new version to be used for my Jenkins slaves.
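For illustration, slaves/java/Dockerfile might look something like this minimal sketch (the base image and the choice of Maven are assumptions, not part of the question; evarga/jenkins-slave is just one possible starting point):
# minimal sketch of a Java build slave; base image and tools are assumptions
FROM evarga/jenkins-slave
RUN apt-get update && \
    apt-get install -y --no-install-recommends maven && \
    rm -rf /var/lib/apt/lists/*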
I would have the above handled by another project running on my Jenkins instance that simply monitors my Git repository. When changes are made to the repository it runs the build commands in order and then pushes the new images to DockerHub:
docker build -f slaves/java/Dockerfile -t my-company/java-slave:$BUILD_NUMBER -t my-company/java-slave:latest .
docker build -f slaves/node/Dockerfile -t my-company/node-slave:$BUILD_NUMBER -t my-company/node-slave:latest .
docker push my-company/java-slave:$BUILD_NUMBER
docker push my-company/java-slave:latest
docker push my-company/node-slave:$BUILD_NUMBER
docker push my-company/node-slave:latest
You can then update your Jenkins configuration to use the new images for the slaves when you're ready.
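When the images are in place, a pipeline can pick the matching slave image per build, for example with the Docker Pipeline plugin (a sketch; the image name comes from the push commands above):
pipeline {
    agent {
        docker { image 'my-company/java-slave:latest' }
    }
    stages {
        stage('build') {
            steps {
                sh 'mvn -B clean package'
            }
        }
    }
}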

Related

CICD Jenkins with Docker Container, Kubernetes & Gitlab

I have a workflow where
1) Team A & Team B would push changes in an app to a private gitlab (running in docker container) on Server A.
2) Their app should contain Dockerfile or docker-compose.yml
3) Gitlab should trigger jenkins build (jenkins runs in docker container which is also on Server A) and executes the usual build things like test
4) Jenkins should build a new docker image and deploy it.
Question:
If Team A needs packages like maven and npm to build web applications but Team B needs other packages like C++ etc., how do I solve this issue?
Because I don't think it is correct for my Jenkins container to have all these packages (mvn, npm, yarn, c++ etc.) and then execute the Jenkins build.
I was thinking that Team A should get a container with packages it needs installed. Similarly for Team B.
I want to make use of Docker, Kubernetes, Jenkins and Gitlab. (As much container technology as possible)
Please advise me on the workflow. Thank you
I would like to share a developer's perspective, which is different from the "operations-centric" state of mind presented in the question.
For developers, Jenkins is also a tool that can trigger the build of the application. Yes, of course, a built artifact can be a Docker image as well, but this is not what developers are really concerned about. You've referred to this as "the usual build things like test", but developers have entire ecosystems around this "usual build".
For example, mvn, which you've mentioned, has great dependency resolution capabilities: it can resolve the dependencies on its own. Roughly the same holds for other build tools. I'll stick with Maven for the rest of this answer.
So you don't need to maintain dependencies yourself, but, as a Jenkins maintainer, you should provide the technical ability to actually build the product (which means running Maven, which in turn resolves/downloads all the dependencies, runs the tests, produces test results and can even create a Docker image or deploy that image to some image repository if you wish ;) ).
So developers who use these build technologies maintain their own build definitions (declarative, as in the case of Maven, or something like makefiles in the case of C++) and should be able to run their own tools.
Now this picture doesn't contradict containerization:
The Jenkins image can contain maven/make/npm, really just a small number of tools needed to run the build. The actual build scripts can be part of the application source code base (maintained in Git).
So when Jenkins gets the notification about the build, it should check out the source code, run some script (like mvn package), show the test results and then, as a separate step or from Maven itself, create an image of your application and upload it to some repository or hand it over to the Kubernetes cluster, depending on your actual DevOps needs.
Note that during mvn package Maven will download all the dependencies (3rd-party packages) into the Jenkins workspace and compile everything with the Java compiler, which you obviously also need to make available on the Jenkins machine.
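As an illustration, a minimal declarative Jenkinsfile following this flow might look roughly like the sketch below (the image name is a placeholder, and the docker.build/push steps assume the Docker Pipeline plugin is installed):
pipeline {
    agent any
    stages {
        stage('checkout') {
            steps {
                checkout scm
            }
        }
        stage('build and test') {
            steps {
                sh 'mvn -B package'
            }
        }
        stage('docker image') {
            steps {
                script {
                    // build and push the application image; the name is a placeholder
                    docker.build('my-company/my-app:latest').push()
                }
            }
        }
    }
}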
Whoa, that's a big one and there are many ways to solve this challenge.
Here are 2 approaches which you can apply:
Solve it by using different types of Jenkins Slaves
In the long run you should consider running Jenkins workloads on slaves. This is not only desirable for your use case but also scales much better under heavier workloads, because in the worst case dense workloads can kill your Jenkins master.
See https://wiki.jenkins.io/display/JENKINS/Distributed+builds for reference.
When defining slaves in Jenkins (e.g. with the ec2 plugin for AWS integration) you can use different slaves for different workloads. That means you prepare a slave (or slave image, in AWS this would be an AMI) for each specific purpose you've got. One for Java, one for node, you name it.
These defined slaves can then be used within your jobs by checking the Restrict where this project can be run option and entering the label of your slave.
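The same restriction can be expressed in a declarative pipeline via the slave's label, roughly like this (the label name and build commands are assumptions):
pipeline {
    agent { label 'node' }
    stages {
        stage('build') {
            steps {
                sh 'npm ci && npm test'
            }
        }
    }
}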
Solve it by doing everything within the Docker Environment
The simplest solution in your case would be to just use the Docker infrastructure to build Docker images of any kind, push them to your private Docker registry (all big cloud providers have such container registry services) and pull them at deploy time. This would save you the pain of installing new technology stacks every time you change them.
I don't really know how familiar you are with Docker or other containerization technologies, so I'll roughly outline the solution for you.
Create a Dockerfile with the desired base image as a starting point. Just google them or look for them on Docker Hub. Here's a sample for Node.js:
# pick the Node base image you need
FROM node:lts
MAINTAINER devops@yourcompany.biz
ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --silent
# I recommend using a .dockerignore file to avoid copying unnecessary stuff into your image
COPY . .
CMD [ "npm", "start"]
Adapt the image and create a Jenkinsfile for your CI/CD pipeline. There you should build and push your docker image like this:
stage('build docker image') {
  ...
  steps {
    script {
      docker.build('your/registry/your-image', '--build-arg NODE_ENV=production .')
      docker.withRegistry('https://yourregistryurl.com', 'credentials token to access registry') {
        docker.image('your/registry/your-image').push(env.BRANCH_NAME + '-latest')
      }
    }
  }
}
Do your deployment (in my case it's done via Ansible) by pulling and running the previously pushed image in your deployment scripts.
If you define your questions in a little more detail, you'll get better answers.

dockerhub automated build from single repo with single dockerfile building multiple images

I have a single git repository on github with:
a Dockerfile which builds multiple images meant to be used together (a Maven build produces a war file and sql files by downloading them from artifact repositories; a multi-stage build then creates a slim Tomcat image with the war and a slim MySQL image with the sql data preloaded).
a docker-compose.yml file that uses the "target" instruction to build and run containers on the images from the multi-stage build.
This works well during development. But it forces users to build images on their computer. I want the users to be able to only download images. The setup should also be using dockerhub's automated build to keep images up to date.
How can I set it up to achieve this ? What command(s) or file(s) do I give the users to allow them to download images and run containers ? If it is not possible, what can I do to make it possible (split the repo? copypaste the dockerfile? publish intermediate images to dockerhub and ensure a correct build order ? Don't use dockerhub's automated build ?)
To use dockerhub's automated builds you would need to build one image per Dockerfile and have one Dockerfile per repo. The image name comes from the source repository name (with the github org/user name as the docker image user name and the github repo name as the docker image name). Multistage builds work in automated builds but only one image is published per Dockerfile (the final image of the build).
You could build the images in your CI or even on your local machine and then push to dockerhub. You'd just need to have an account on dockerhub and be logged in to that account when you use the docker push command. When doing this push there doesn't have to be any mapping to GitHub repos but your image names should start with <dockerhub_user>/ as a kind of prefix (explained at https://docs.docker.com/docker-hub/repos/). It's ok if they are built under a different name as you could rename by retagging before pushing. This way you can also build the images however you like.
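For example, retagging a locally built image and pushing it might look like this (the image name and tag are placeholders; <dockerhub_user> is your Docker Hub account):
docker tag my-app <dockerhub_user>/my-app:1.0
docker login
docker push <dockerhub_user>/my-app:1.0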
When you have images in dockerhub you can just refer to them in the docker-compose file using the form image: <dockerhub_user>/<dockerhub_image_name>:<tag>. The images will automatically be pulled when the user does docker-compose up.
Here are some tips and links that should help your situation:
Automated builds are a convenient way to deploy your images.
This part is pretty easy. You'll need accounts with Docker Hub and Github. You can register these accounts for free.
When you create a repository on Docker Hub you can link it to your Github repository to automate the build.
Recommendations:
Split your services into separate Dockerfiles. Ideally you should use separate repositories: Docker Compose will pull them together at the end. A division of services will also help if anyone wants to implement e.g. a cloud database backend for their deployment.
Don't store database files inside a container: containers should be ephemeral.
For a robust design, test your builds.
Docker Hub automated builds are very flexible with the use of build hooks.
This part is a little tricky because I haven't found the best documentation. It also might not be necessary if you split your Dockerfile.
I've successfully created automated builds with multiple tags and targets using a hook at hooks/build but after reading the documentation it looks like you should also be able to use hooks/post_build.
Your hook could simply build the correct target and push the tag to Docker Hub.
For your repository that should look like:
#!/usr/bin/env bash
docker build --target lutece-mysql -t lutece/mysql .
docker push lutece/mysql
If you end up using hooks/build you might need to build the final target as the last step.
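A hooks/build script along those lines might look like this (a sketch; $IMAGE_NAME is the tag Docker Hub has historically set for the image it expects the hook to produce, so treat that as an assumption and verify against the current docs):
#!/usr/bin/env bash
# build and push the intermediate target first
docker build --target lutece-mysql -t lutece/mysql .
docker push lutece/mysql
# build the final target last so Docker Hub pushes it as the repository image
docker build -t $IMAGE_NAME .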
Recommendations:
If you need multiple tags for an image, use a hook at hooks/post_push to add additional tags. That way each tag should link users to the same image. e.g.
#!/usr/bin/env bash
docker tag lutece/tomcat:master lutece/tomcat:latest
docker push lutece/tomcat:latest
Additionally you can use build hooks to label your image with things like build date and git commit.
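A sketch of such a labelling hook, assuming the $SOURCE_COMMIT and $IMAGE_NAME environment variables that Docker Hub has exposed to build hooks (verify against the current docs; the label keys are just a convention):
#!/usr/bin/env bash
# label the image with build metadata before Docker Hub pushes it
docker build \
  --label build-date=$(date -u +"%Y-%m-%dT%H:%M:%SZ") \
  --label vcs-ref=$SOURCE_COMMIT \
  -t $IMAGE_NAME .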
Deployment with Docker Compose
Unfortunately I haven't done this part so I can't confirm how to get this to work.
With your repositories in Docker Hub and a working docker-compose.yml your clients may only need to run docker-compose up in the directory with your docker-compose.yml file. Docker Compose should pull in the images from Docker Hub.

Docker hub/store doesn't show build information

I'm having problems with docker continuous integration.
I set up automated builds on cloud.docker.com for my project, but there is no information at all either on their sites (Hub/Store) or in their API; both show my build as not automated.
In Docker Cloud the automated build shows up as expected, but in the registry there is no "Builds" section. I'd expect it to look like other members' projects, which do show one.
Also, like I said, using the endpoint: https://registry.hub.docker.com/v2/repositories/{user}/{project}/ shows me "automated build: false"
I just realized that, in some way, there is no link between the Docker Cloud automatic builds and Docker Hub ones.
If you create an automated build in Docker Hub, everything works. I don't understand the logic of this, because if you create a repo either in Docker Cloud or Docker Hub they are synchronized as one, yet automated builds created on Docker Cloud don't show up correctly in Docker Hub/Store.
Both the Docker Hub and Docker Store pages will be updated whenever you push to your repo or send a new build with docker push, but the information about the automated build will only be shown in Docker Cloud if you created it there.

Can a Dockerfile push itself to a registry?

For the use case where a Dockerfile needs to be built for each platform it's on (a bit niche, I know), is there a way for it to push itself to the registry, i.e. calling docker push from within the Dockerfile?
Currently, this is done:
docker build -t my-registry/<username>/<image>:<version> .
docker login my-registry
docker push my-registry/<username>/<image>:<version>
Could the login and push steps be directly or cleverly built into the Dockerfile being built or with a combination of others?
Note: This would operate in a secure environment of trustworthy users (so all users being able to push to the registry is fine).
Note: This is an irregular use of Docker, not a good idea for building/packaging software in general, rather I am using Docker to share environments between developers.
I am wondering why you can't have a wrapper script file (say shell or bat) around the Dockerfile to do these steps:
docker build -t my-registry/<username>/<image>:<version> .
docker login my-registry
docker push my-registry/<username>/<image>:<version>
What is so specific about the Dockerfile? I know this is not addressing the question that you asked, and I might have totally misunderstood your use case, but I am just curious.
As others pointed out, this can be easily achieved using CD systems like Drone.io/Travis/Jenkins etc.
At first this sounds to me like the widely circulated "NASA space pen" myth. But as I said earlier, you may have a proper valid use case which I am not aware of yet.
docker build creates an image using the recipe provided in the Dockerfile. Each line in the Dockerfile creates a new temporary filesystem image (layer) with a different checksum. The image produced after the last line of the Dockerfile is your final build result and is tagged with the provided name.
So it is not possible to put a docker push command inside the Dockerfile, because the image creation is not finished yet while the Dockerfile is being executed.
Having a Dockerfile push its own image will never work.
To explain a bit more:
What happens when you build an image: Docker will spawn a container and do everything the Dockerfile specifies. You can even see this when running docker ps during the build. If the exit status of the container is 0 (no errors), an image is created from the container.
We don't really have much control over this process other than the build parameters. It's definitely a chicken and egg problem.
Build systems should do this stuff
It's even fairly easy to do this in Jenkins. The Jenkins setup I have uses a Docker plugin and executes build commands against remote Docker hosts, so the Jenkins nodes only pull the repo, run a build, then push the image to a private repo properly tagged (and then delete the local image). You can also run unit tests in Docker by making a separate Dockerfile (it gets a bit more complicated when you need external mock services).
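Stripped down, what each node does amounts to something like this (the registry URL and image name are placeholders; BUILD_NUMBER is a variable Jenkins provides):
docker build -t registry.example.com/myteam/my-app:${BUILD_NUMBER} .
docker push registry.example.com/myteam/my-app:${BUILD_NUMBER}
docker rmi registry.example.com/myteam/my-app:${BUILD_NUMBER}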
Builds per branch/architecture are not too hard to set up. With remote hosts doing the build work, we can boost the job limit in Jenkins fairly high and it can run on cheap hardware.
You can also run Jenkins in Docker and make it build images in the Docker engine it runs in. I do that through TLS, though the old trick of mapping the socket file into the Jenkins container might still work.
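The socket-mapping variant is roughly this (using the official jenkins/jenkins:lts image; you still need a docker CLI inside the container to talk to the mounted socket):
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts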
I think I started with the CloudBees Docker Build and Publish plugin and it was fairly easy to use, but now I use a custom plugin so I have no idea of the alternatives.

Run Jenkins master and slave with Docker

I want to setup Jenkins master on server A and slave on server B with use of Docker.
Both servers are virtual machines dedicated for Jenkins.
Currently I have started Docker container on server A for master, based on the official Jenkins docker image. But what docker image should I use for Jenkins slave?
That actually depends on the environment and tools you need in your build environment. For example, if you build a C project, you would need an image containing a C compiler and possibly make if you use Makefiles. If you build a Java project, you would need a JDK with a Java compiler and possibly Ant / Maven / Gradle if you use them as part of your build.
You can use the evarga/jenkins-slave as a good starting point for your build slave.
This image already contains JDK. If you simply need JDK and Maven on your build slave, you can build your Docker image with the following Dockerfile:
FROM evarga/jenkins-slave
RUN apt-get update && apt-get install -y maven
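Once built, you could run the slave and add it to Jenkins as an SSH node, for example (the image name and host port are just examples; the evarga image runs an SSH daemon on port 22):
docker build -t my-company/jenkins-slave-maven .
docker run -d --name build-slave -p 2222:22 my-company/jenkins-slave-maven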
Using Docker images for build slaves is actually a good idea. Some of the reasons appear at Templating Jenkins Build Environments with Docker Containers:
Docker has established itself as a popular and convenient way to bootstrap isolated and reproducible environments, which enables Docker containers to be the most maintainable slave environments. Docker containers' tooling and other configurations can be version controlled in an environment definition called a Dockerfile, and Dockerfiles allow multiple identical containers to be created quickly from this definition, or more customized off-shoots to be created by using that Dockerfile's image as a base.
I suggest you try using dynamic/ephemeral Docker nodes, instead of manually creating nodes and connecting to them via SSH. Take a look at https://engineering.riotgames.com/news/putting-jenkins-docker-container, it's very powerful and I think it's one of the killer use cases for Docker.
