GitLab CI using Docker - how to rename the build container?

We have set up our CI environment with GitLab CI, using Docker images as described here.
With the default settings, the build containers get rather generic names like runner-oh2M8zk--project-206-concurrent-0-build-4.
The machine that hosts the GitLab runner also runs several other containers. To be able to monitor activity on that machine, we want to be able to identify which project (and maybe which commit or branch) triggered a certain build container. With the current naming this is next to impossible.
As far as I know, the documentation does not give any hint on how to specify the name of a container run by the GitLab runner. Is there any option (presumably in .gitlab-ci.yml) to set that name, or at least to specify a prefix/suffix for it?

Related

Slave container provisioning on Kubernetes

I am using the Kubernetes plugin in Jenkins. I want to run an init script before the slave container is provisioned on Kubernetes, but the Kubernetes plugin's pod template won't allow me to, or I cannot find a way to do it. Can anybody please help me with this? I need to run a certain set of commands on the Kubernetes slave container before it provisions. This is how my config looks.
1 - You don't need to use the slave image; Jenkins will have an agent container regardless. You should only specify the containers you want it to run additionally.
2 - On those containers you can specify the entrypoints or pre-provision whatever you want beforehand and just worry about the execution. That means you can get a container ready to go and assume the code will be there; if you need to run any extra commands on the code, you can just add an extra script step.
3 - In order for your step to be executed in a container, you have to be explicit in your pipeline, otherwise it will run on the master (see the sketch below).
I can't really guide you through the UI, because I use a Jenkinsfile inside the projects I want to build.
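For point 3, a minimal scripted-pipeline sketch using the Kubernetes plugin could look like the following (the pod label, container name and image are placeholders, not taken from your config):
podTemplate(label: 'my-build-pod', containers: [
    // placeholder image; bake your init steps into the image or run them in the steps below
    containerTemplate(name: 'build', image: 'maven:3-jdk-8', command: 'cat', ttyEnabled: true)
]) {
    node('my-build-pod') {
        // without this container() block, the steps would run in the default jnlp container
        container('build') {
            sh 'mvn -v'   // your pre-provision commands / build steps go here
        }
    }
}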

Using Jenkins and Docker to deploy to a server

Hey, I am currently learning Jenkins pipelines for CI and CD.
I have successfully deployed my Express.js app with Jenkins on my local machine, which acts as the server.
It works, but my ENV values are exposed in my public repository.
I am trying to understand how to hide those ENV values in Jenkins, using variables.
And is it also possible to use variables in the Dockerfile to hide my ENV?
In my Jenkins pipeline I pass my ENV with docker run -p -e myEnV=key.
I would love to hide my ENV so people cannot see my keys inside my Jenkinsfile and Dockerfile.
I am using multibranch pipelines in Jenkins, because I followed the Hackernoon article on deploying a React and Node.js app with Jenkins.
And anyway, what is the advantage of pushing our container image to Docker Hub?
If we push it there and later want to move to another server, we just need to pull our Docker Hub repo on the new server, because everything we build is pushed to our Docker Hub repo every time, right?
For your first question, you should use the EnvInject plugin, or, if you are running Docker from the pipeline, set an environment variable in Jenkins and then access it in the docker run command.
In the pipeline, you can access an environment variable like this:
${env.DEVOPS_KEY}
So your docker run command will be
docker run -p -e myEnV=${env.DEVOPS_KEY}
But make sure you have set DEVOPS_KEY on the Jenkins server.
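If you manage the key with the Credentials plugin instead of a plain environment variable, a declarative pipeline sketch could look like this (the credential ID 'devops-key', the port mapping and the image name are placeholders):
pipeline {
    agent any
    environment {
        // binds a "secret text" credential stored in Jenkins; 'devops-key' is a placeholder ID
        DEVOPS_KEY = credentials('devops-key')
    }
    stages {
        stage('Run') {
            steps {
                // single quotes so the shell, not Groovy, expands the variable
                sh 'docker run -d -p 3000:3000 -e myEnV=$DEVOPS_KEY my-express-app'
            }
        }
    }
}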
Using EnvInject is pretty simple.
You can also inject variables from a file.
For your second question: yes, just pull the image from Docker Hub and use it.
Anyone from your team can pull and run it, so only the Jenkins server has to build and push the image. That saves time for everyone else, and the image will be up to date and also available remotely.
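With the Docker Pipeline plugin, the build-and-push side of that workflow can be sketched roughly like this (the image name and the credentials ID are placeholders):
node {
    checkout scm
    def image = docker.build("myorg/express-app:${env.BUILD_NUMBER}")
    // 'dockerhub-creds' is a placeholder for a username/password credential stored in Jenkins
    docker.withRegistry('https://registry.hub.docker.com', 'dockerhub-creds') {
        image.push()
        image.push('latest')   // optional convenience tag
    }
}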
Never push or keep sensitive data in your Docker image.
Using Docker Hub or any kind of registry, like Sonatype Nexus, Registry or JFrog Artifactory, helps you keep your images with their tags and share them with anyone. It also means the images are safe there: if your local environment goes down, the images will still be available. It also helps with version control. If you are using multibranch pipelines, you will probably generate different images with different tags.
Having a single Jenkins instance run the jobs and do the deployments itself is not good practice. In my experience from previous work, the typical symptoms are: the server becomes bloated after some time, Jenkins stops working at exactly the moment you need it most, and the application you deployed does not work because Jenkins has too many jobs taking all the resources.
Currently, I am running separate servers for the Jenkins master and slaves. The master instance does not run any jobs; only the slave instances do. This keeps Jenkins alive all the time. If a slave goes down, you can simply set up another one.
For deployment, I am using Ansible, which can deploy the same Docker image to multiple servers simultaneously. It is easy to use and, in my opinion, quite safe as well.
For sensitive data such as keys, passwords and API keys, you are right about using the -e flag. You can also use --env-file; this way you keep the values out of the Docker image and only keep the file. For passwords, I prefer to have a shell script that generates the environment files (see the sketch below).
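A rough sketch of that --env-file approach from a pipeline (the credential ID, file name and image are placeholders):
node {
    // pull the secret from Jenkins credentials and write it to a file that never enters the image
    withCredentials([string(credentialsId: 'devops-key', variable: 'DEVOPS_KEY')]) {
        sh '''
            echo "myEnV=$DEVOPS_KEY" > app.env
            docker run -d --env-file app.env -p 3000:3000 my-express-app
            rm -f app.env
        '''
    }
}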
If you are planning to use the environment variable as it is, you can safely keep the value inside Jenkins and then read it back as a variable. You can see how on the Jenkins website.

Is there a way for a docker pipeline file to determine the image of the child node it runs on?

I'd like to be able to dynamically provision docker child nodes for builds and have the configuration / setup of those nodes be part of the Jenkinsfile groovy script it uses.
Limitations of the current setup of jobs means Jenkins has one node/executor (master) and I'd like to support using Docker for nodes to alleviate this bottleneck.
I've noticed there are two ways of using a Docker container as a node:
You can use the agent section in your pipeline file, which allows you to specify an image to use. As part of this, you can target a specific node which supports running Docker images (see the sketch after the second option below), but I haven't gotten far enough to see what happens.
You can use the Jenkins Docker Plugin which allows you to add a Docker Cloud in Jenkins' configuration. It allows you to specify a label which, when used as part of a build, will spawn a container in that "cloud" from the image chosen in the cloud configuration. In this case, the "cloud" is the docker instance running on the Jenkins server.
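For reference, the first approach typically looks something like this in a declarative Jenkinsfile (the image and the label of a Docker-capable static agent are placeholders):
pipeline {
    agent {
        docker {
            image 'node:8-alpine'   // image the build runs in; placeholder
            label 'has-docker'      // hypothetical label of a node that can run Docker
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}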
Unfortunately, it doesn't seem like you can use both together: using the label but specifying a Docker image in the pipeline (1), where the label matches a Docker Cloud template configuration (2), does not seem to work and instead produces a "label not found" error during the build.
Ideally I'd prefer the control to be in the pipeline groovy file so the configuration is stored with the application (1), not with the Jenkins server (2). However, it suggests that if I use the agent section and provide a docker image, it still must target an existing executor first (i.e. master) which will cause other builds to queue until the current build is complete.
I'm in the middle of migrating builds, so not all builds can use a Docker container as the node yet, and those builds will have issues when run in parallel on the master node.
Is there a way for a docker pipeline file to determine the image of the child node it runs on?
There are a few options I have considered but not attempted yet:
Migrate jobs to run on the "docker cloud" until all jobs support running on child container nodes, then move the configuration from Jenkins to the pipeline build file for each job and turn on parallel builds on the master node.
Attempt to add a new node configuration which is effectively a copy of master (uses the same server, just different location). Configure it to support parallel builds, and have all migrated jobs target the node explicitly during builds.

Run Jenkins master and slave with Docker

I want to set up a Jenkins master on server A and a slave on server B using Docker.
Both servers are virtual machines dedicated to Jenkins.
Currently I have started a Docker container on server A for the master, based on the official Jenkins Docker image. But what Docker image should I use for the Jenkins slave?
That actually depends on the environment and tools you need in your build environment. For example, if you build a C project, you would need an image containing a C compiler and possibly make if you use Makefiles. If you build a Java project, you would need a JDK with a Java compiler and possibly Ant / Maven / Gradle if you use them as part of your build.
You can use the evarga/jenkins-slave as a good starting point for your build slave.
This image already contains a JDK. If you simply need a JDK and Maven on your build slave, you can build your Docker image with the following Dockerfile:
FROM evarga/jenkins-slave
RUN apt-get update && apt-get install -y maven
Using Docker images for build slaves is actually a good idea. Some of the reasons appear at Templating Jenkins Build Environments with Docker Containers:
Docker has established itself as a popular and convenient way to bootstrap isolated and reproducible environments, which enables Docker containers to be the most maintainable slave environments. Docker containers' tooling and other configurations can be version controlled in an environment definition called a Dockerfile, and Dockerfiles allow multiple identical containers to be created quickly from that definition, or more customized off-shoots to be created by using that Dockerfile's image as a base.
I suggest you try using dynamic/ephemeral Docker nodes instead of manually creating nodes and connecting to them via SSH. Take a look at https://engineering.riotgames.com/news/putting-jenkins-docker-container; it's very powerful, and I think it's one of the killer use cases for Docker.

How would I go about creating a Docker environment in CI with lots of services?

Suppose I want to move my current acceptance-test CI environment to Docker, so I can benefit from the performance improvements and also quickly set up multiple clones for slow acceptance tests.
I would have a lot of services.
The easy ones would be Postgres, MongoDB, Redis and such, which are updated rarely.
However, what would I do if my own product consists of many services as well, say 10-20 services that all need to work together for the tests? Is it even feasible to handle this with Docker, i.e., how can the CI efficiently control so many containers automatically AND clone them to run acceptance tests in parallel?
Also, how would I automatically update the containers for the CI? Would the CI simply need to rebuild every container at the start of every run from the HEAD of every service branch? Or would the CI run git pull and some update/migrate command in every service?
With VMs it is easy to control these services, but I would like to be convinced that Docker is as good or better for this as well.
I'm in the same position as you and have recently gotten this all working to my liking.
First of all, while docker is generally intended to run a single process, for testing I've found it works better for the docker container to run all services needed. There is some duplication in going this route, but you don't have to worry about shared services, like Mongo or PostgreSQL. This can be accomplished by using something like Supervisor: http://docs.docker.com/articles/using_supervisord/
The idea is to configure supervisor to start all necessary services inside the container, so they are completely isolated from other containers. In my environment, I have mongo, xvfb, chrome and firefox all running in a single container. So really, you still are running a single process (supervisor) but it starts many others.
As for adding repositories to your container, I just have the host machine check out the code, and then when I run Docker I use the -v flag to add the repo to the container. This way you don't need to rebuild the container each time. I build containers nightly with the latest code, with all necessary gems already added, for a faster 'gem install' at testing time.
Lastly I have a script as the entrypoint of the container that allows me to pass in what test I want to run.
Jenkins then just runs the docker commands and passes in the tests to run. These can be done in parallel, sequentially or any other way you like. I'm currently looking into having these tests run on slave Jenkins instances in an auto-scaling group in AWS.
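The Jenkins side of this can be as simple as the following sketch (the mount path, image name and test target are made up):
node {
    checkout scm
    // mount the repo that Jenkins already checked out into the container and
    // hand the test target to the image's entrypoint script
    sh "docker run --rm -v ${env.WORKSPACE}:/repo my-acceptance-image spec/acceptance"
}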
Hope that helps.
Drone is a Docker-based open-source CI tool, plus an online service: https://drone.io
Generally, it runs the build and tests in Docker containers and removes all containers after the build. You just need to provide a file named .drone.yml, with configuration similar to .travis.yml, to configure your build.
It will manage your services, like databases and caches, as linked containers.
For your build environment, you can use existing Docker images as templates for your dependencies.
So far, it supports github.com and GitLab. For your own CI system, you can use the Drone CLI only, or its web interface.
I recommend using the Jenkins Docker plugin. Although it is new, it is starting to expose the power of Docker inside Jenkins, and the configuration is documented well there. (Let me know if you have problems.)
The strategy I plan to use:
Create different app images to serve the different services, like Postgres, MongoDB, Redis and such. Since they are rarely updated, they will be configured globally as "cloud" templates in advance, and each one will have a label to indicate its service.
In each Jenkins job, an image is selected as the slave node (using that label as the name).
When the job is triggered, it will automatically start the Docker container as a slave within seconds.
It should work for you (see the sketch below).
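In pipeline terms, a job then only has to request the label that was configured on the Docker Cloud template; everything else is provisioned for it (the label below is a placeholder):
// 'docker-postgres' is a hypothetical label defined on a Docker Cloud template in
// Manage Jenkins; the plugin spins up a fresh container for this build and removes
// it when the build finishes.
node('docker-postgres') {
    sh 'echo "running inside an ephemeral Docker slave"'
}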
BTW: at the time I answered (May 2014), the plugin was not mature enough, but it is the right direction.
