I want to build and test multiple Docker images from one repo in GitLab. This monorepo holds a few microservices that work together.
We already have a working docker-compose setup to aid local development, so that's a start.
The goal is to move build + test to GitLab, run an end-to-end (E2E) test of these containers together, and let GitLab upgrade our staging environment.
Building multiple Docker images is just multiple jobs in the build stage of the (one) pipeline, and testing can happen per image, I guess, which leaves testing the whole system with multiple containers running in a (or the?) staging environment.
How can GitLab run E2E tests against multiple containers? (Or is this unwise to begin with?)
Would we need Kubernetes for the inter-container mapping (network, volumes, but also dependencies) that docker-compose currently facilitates?
We are using a self-hosted GitLab CE instance.
Update: shortened the question and switched to proper terminology.
I haven't worked with PHP and have never done a Docker build for a multi-module setup, although I did try out a quick multi-module example for Node.js. Check this Repo
Refer to its .gitlab-ci.yml, which builds two independent hello-world-style Node.js modules.
I have now adapted this to build the Docker images separately.
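A minimal sketch of what such a .gitlab-ci.yml could look like, assuming two modules that each have their own Dockerfile (job names, paths and tags are illustrative assumptions, not taken from the linked repo):

stages:
  - build

# Each module gets its own build job; both run in parallel in the build stage.
build-module-a:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/module-a:$CI_COMMIT_SHORT_SHA" module-a/
    - docker push "$CI_REGISTRY_IMAGE/module-a:$CI_COMMIT_SHORT_SHA"

build-module-b:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/module-b:$CI_COMMIT_SHORT_SHA" module-b/
    - docker push "$CI_REGISTRY_IMAGE/module-b:$CI_COMMIT_SHORT_SHA"

The CI_* variables are provided by GitLab; the module paths and job names are just one possible layout.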
Original answer (question changed 12th July)
You haven't mentioned which documentation you are referring to. You should be able to configure this using .gitlab-ci.yml.
AFAIK, building a Docker image should be programming-language agnostic. If you can run docker build . locally, the following documentation should help:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
You can use a GitLab pipeline for both testing and building images. Refer to https://docs.gitlab.com/ee/ci/pipelines.html
Refer to this for PHP build - https://docs.gitlab.com/ee/ci/examples/php.html
Answering my own question, 2 years of experience later.
For a normal (single-project/service) repo the CI is straightforward.
For a monorepo, an end-to-end test is straightforward.
You can combine these aspects by having a repo with submodules; it's essentially both. But this can cause overhead if done continuously, and it is mostly useful for proving beta/alpha builds and release candidates.
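For the monorepo E2E case specifically, one rough sketch of a GitLab CI job that reuses the existing docker-compose setup could look like this (the stage/job/service names and the test command are assumptions):

e2e:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - apk add --no-cache docker-compose   # or pick a job image that already ships compose
  script:
    - docker-compose up -d --build
    - docker-compose run --rm e2e-tests   # hypothetical compose service that runs the E2E suite
    - docker-compose down -v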
If you know more options, feel free to add here.
Related
My goal is to run a bunch of e2e tests every night to check if the code changes made the day before break core features of our app.
Our platform is an Angular app which calls 3 separate Node.js backends (auth-backend, old-backend and new-backend). We also use MongoDB as the database.
Let's assume each of the 4 projects has a branch called develop, which is the only branch that should be tested.
My approach would be the following:
I run every backend plus the database in a separate Docker container.
Therefore I need to either get the latest build of that project from GitLab using SSH,
or clone the repo into the Docker container and run a build inside it.
After all projects are running on the right ports (which I'd specify somewhere), I start the npm script that runs the Cypress e2e tests.
All of that should be defined in some file. Is that even possible?
I do not have experience with GitLab CI, but I know that other CI systems provide the ability to run e.g. bash scripts.
So I guess you can do the following:
Write a local bash script that pulls all the repos (since GitLab can provide secret keys, you can use these to authenticate against your GitLab repos).
After all of these repos have been pulled, run the build commands for your different repos.
Since some of your repos depend on each other, you may have to add a build command for exactly this use case, so that you always have a production-like state, or whatever you need.
After you have pulled and built your repos, start the servers for your backends.
I guess your Angular app uses some kind of environment variables to define which servers to send requests to, so you also have to define those in the build command/script for your app.
Then you should be able to run your tests
Personally, I think Docker is kind of overkill for this use case. Possibly you should define and run a pipeline that always creates a new develop state of your backend and pushes the Dockerfile to your server. Then you should be able to create your test pipeline, which first starts the Docker containers on your own server (so you do not have an "in-pipeline server"). At that point all your backends have been started, so your test pipeline can run your e2e tests against those backend servers.
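For illustration only, the wiring described in the question could be expressed in a docker-compose file roughly like this (image names, environment variables and the npm script are assumptions, not a tested setup):

version: "3"
services:
  mongo:
    image: mongo:4
  auth-backend:
    image: registry.example.com/auth-backend:develop   # hypothetical image names
    depends_on:
      - mongo
  old-backend:
    image: registry.example.com/old-backend:develop
    depends_on:
      - mongo
  new-backend:
    image: registry.example.com/new-backend:develop
    depends_on:
      - mongo
  e2e:
    build: ./frontend            # assumed path of the Angular app with the Cypress suite
    command: npm run e2e         # assumed npm script that starts Cypress
    environment:
      - AUTH_API_URL=http://auth-backend:3000   # assumed variables the app reads
      - OLD_API_URL=http://old-backend:3000
      - NEW_API_URL=http://new-backend:3000
    depends_on:
      - auth-backend
      - old-backend
      - new-backend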
I would also advise not running this pipeline every night, but rather whenever the develop state of one of those linked repos changes.
If you need help setting this up, feel free to contact me.
I have a workflow where
1) Team A & Team B would push changes in an app to a private GitLab (running in a Docker container) on Server A.
2) Their app should contain a Dockerfile or docker-compose.yml.
3) GitLab should trigger a Jenkins build (Jenkins runs in a Docker container, also on Server A) and execute the usual build steps like tests.
4) Jenkins should build a new Docker image and deploy it.
Question:
If Team A needs packages like Maven and npm for web applications, but Team B needs other packages like C++ etc., how do I solve this issue?
I don't think it is right for my Jenkins container to have all these packages (mvn, npm, yarn, C++ etc.) installed just to execute the Jenkins build.
I was thinking that Team A should get a container with the packages it needs installed, and similarly for Team B.
I want to make use of Docker, Kubernetes, Jenkins and GitLab (as much container technology as possible).
Please advise me on the workflow. Thank you
I would like to share a developer's perspective, which is different from the "operations-centric" state of mind presented in the question.
For developers, Jenkins is also a tool that can trigger the build of the application. Yes, of course, a built artifact can be a Docker image as well, but this is not what developers are really concerned about. You've referred to this as "the usual build things like test", but developers have entire ecosystems around this "usual build".
For example, mvn, which you've mentioned, has great dependency-resolution capabilities; it can resolve the dependencies on its own. Roughly the same holds for other build tools. I'll stick with Maven for the rest of this answer.
So you don't need to maintain dependencies yourself, but as a Jenkins maintainer you should provide the technical ability to actually build the product (which means running Maven, which in turn resolves/downloads all the dependencies, runs the tests, produces test results, and can even create a Docker image or deploy the image to some image repository if you wish ;) ).
So developers who use these build technologies maintain their own build scripts (declarative in the case of Maven, or something like makefiles in the case of C++) and should be able to run their own tools.
Now, this picture doesn't contradict containerization:
The Jenkins image can contain maven/make/npm (really a small number of tools, just enough to run the build). The actual build scripts can be part of the application source code base (maintained in Git).
So when Jenkins gets the notification about the build, it should check out the source code, run some script (like mvn package), show the test results, and then, as a separate step or from Maven itself, create an image of your application and upload it to some repository or supply it to the Kubernetes cluster, depending on your actual DevOps needs.
Note that during mvn package, Maven will download all the dependencies (third-party packages) into the Jenkins workspace and compile everything with the Java compiler, which you obviously also need to make available on the Jenkins machine.
Whoa, that's a big one and there are many ways to solve this challenge.
Here are 2 approaches which you can apply:
Solve it by using different types of Jenkins Slaves
In the long run you should consider running Jenkins workloads on slaves.
This is not only desirable for your use case but also scales much better under higher workloads,
because in the worst case dense workloads can kill your Jenkins master.
See https://wiki.jenkins.io/display/JENKINS/Distributed+builds for reference.
When defining slaves in Jenkins (e.g. with the EC2 plugin for AWS integration) you can use different slaves for different workloads. That means you prepare a slave (or slave image; in AWS this would be an AMI) for each specific purpose you've got. One for Java, one for Node, you name it.
These defined slaves can then be used within your jobs by checking the Restrict where this project can be run option and entering the label of your slave.
Solve it by doing everything within the Docker Environment
The simplest solution in your case would be to just use the Docker infrastructure to build Docker images of any kind, which you then push to your private Docker registry (all big cloud providers offer such container registry services) and pull again at deploy time. This saves you the pain of installing new technology stacks every time you change them.
I don't really know how familiar you are with Docker or other containerization technologies, so I'll roughly outline the solution for you.
Create a Dockerfile with the desired base image as a starting point. Just google them or look for them on Docker Hub. Here's a sample for Node.js:
FROM node:lts   # assumed base image; pick the Node version you actually need
MAINTAINER devops#yourcompany.biz
ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --silent
# I recommend using a .dockerignore file to avoid copying unnecessary stuff into your image
COPY . .
CMD [ "npm", "start" ]
Adapt the image and create a Jenkinsfile for your CI/CD pipeline. There you should build and push your docker image like this:
stage('build docker image') {
    ...
    steps {
        script {
            docker.build('your/registry/your-image', '--build-arg NODE_ENV=production .')
            docker.withRegistry('https://yourregistryurl.com', 'credentials token to access registry') {
                docker.image('your/registry/your-image').push(env.BRANCH_NAME + '-latest')
            }
        }
    }
}
Do your deployment (in my case it's done via Ansible) by pulling and running the previously pushed image in your deployment scripts.
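To make that concrete, such a deployment task could look roughly like this sketch (it assumes the community.docker Ansible collection; the container name, image tag and port are placeholders, not the actual playbook):

- name: Pull and (re)start the application container
  community.docker.docker_container:
    name: your-app                                                     # placeholder container name
    image: yourregistryurl.com/your/registry/your-image:master-latest  # placeholder image tag
    pull: true
    state: started
    restart_policy: unless-stopped
    published_ports:
      - "3000:3000"   # assumed application port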
If you define your questions in a little more detail, you'll get better answers.
I'm new to Jenkins/Docker. So far I've found that a lot of official Jenkins documentation recommends using it with Docker. But the necessity and advantages of running Jenkins as a Docker container remain vague to me. In my case, it's a Node/React app and the required environment is not complicated.
Disadvantages I've found running Jenkins as a Docker container:
High usage of hard drive
Directory paths inside the Docker container are more complicated to deal with, especially when working with SSH in pipeline scripts.
Without Docker, I can easily achieve the same, and the Blue Ocean plugin is also available.
So, what are the main benefits of Docker with Jenkins/Jenkins Pipelines? Are there pitfalls for my Node application if I use Jenkins without Docker? Articles to help me dive in are also appreciated.
Jenkins as Code
The main advantage of Jenkins in Docker is that it helps you get Jenkins as Code.
Advantages of Jenkins as code are:
SCM: the code can be put under version control.
History is transparent, and backup and roll-back become easy.
The code is the documentation of your Jenkins setup.
Jenkins becomes portable, so you can run Jenkins locally to try new plugins etc.
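For example, a throwaway local Jenkins can be described in a single docker-compose file (a minimal sketch; the volume name is an arbitrary choice):

version: "3"
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"     # web UI
      - "50000:50000"   # inbound agent port
    volumes:
      - jenkins_home:/var/jenkins_home   # keeps jobs and plugins between restarts
volumes:
  jenkins_home: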
Jenkins pipelines work really well with Docker. As #Ivthillo mentioned: there is no need to install additional tools, you just use images of those tools. Jenkins will download them from the internet (Docker Hub) for you.
For each stage in the pipeline you can use a different image (i.e. tool). Essentially you get "micro Jenkins agents" which only exist temporarily. This makes your Jenkins setup much cleaner.
Disadvantage is:
Jenkins initial (Groovy) configuration is poorly documented on the web.
Simple Node setup
Most of these arguments also hold for a simple Node setup.
Changing the Node version, or running multiple jobs each with a different Node version, becomes easy.
Add your Jenkinsfile inside the Node repo, so everyone with a Jenkins+Docker setup can run your CI/CD.
And finally: gathering knowledge on running your app inside a container will enable you to run your production app in Docker in the future.
Getting started
A while ago I wrote a small blog post on how to get started with Jenkins and Docker, i.e. how to create a Jenkins image for development which you can launch and destroy in seconds.
Let's start by agreeing that we want to adhere to typical Docker/DevOps principles. Therefore, we want to keep tasks isolated, configurations version controlled, and overall customization to a minimum.
The Landscape:
Jenkins is being used as the CI/CD tool on your cloud instance of choice.
The Plan:
Create separate instances for test/staging/prod, each with Docker installed
Spin up Jenkins slave containers on each instance, which are controlled by Jenkins master
When a commit is pushed to the 'test' branch, the Jenkins master sends a task to the 'Test' slave, which ultimately spins up a version of the application.
Similarly, after tests run successfully and code is pushed to the staging or prod branches, Jenkins will have the branch-respective slave build the application.
The Question(s):
What is wrong with this approach?
What can be improved in this approach?
There are a few questions you should ask yourself when taking on this approach; a lot of them are covered in this blogpost.
The final paragraph suggests exposing the Docker socket to the CI container, allowing you to build images on the host machine instead of inside the CI container, saving you a lot of the pain that comes from running Docker in Docker.
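In docker-compose terms, that socket exposure is a single bind mount on the CI container (a sketch, not taken from the blog post; note that the Docker CLI still has to be present inside the image):

services:
  ci:
    image: jenkins/jenkins:lts   # or whichever CI image you run
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # builds use the host's Docker daemon instead of Docker-in-Docker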
Another question you should probably ask is which orchestration service would be used to control the master and slave containers. I had a great time following this blog post by Stelligent to quickly create all I needed on AWS ECS using a CloudFormation stack, but other solutions are obviously an option.
So all in all, I don't see anything wrong with your approach, as long as you exercise caution and follow best practices.
Good luck.
Suppose I want to move my current acceptance-test CI environment to Docker, so I can benefit from performance improvements and also quickly set up multiple clones for slow acceptance tests.
I would have a lot of services.
The easy ones would be Postgres, MongoDB, Redis and such, which are rarely updated.
However, how would I go about it if my own product has lots of services as well? Over 10-20 services that all need to work together for tests. Is it even feasible to handle this with Docker, i.e., how can CI efficiently control so many containers automatically AND clone them to run acceptance tests in parallel?
Also, how would I easily keep the containers up to date for CI? Would the CI simply need to rebuild every container at the start of every run with the HEAD of every service's branch? Or would the CI run git pull and some update/migrate command on every service?
In VMs it's easy to control these services, but I would like to be convinced that Docker is as good or better for this as well.
I'm in the same position as you and have recently gotten this all working to my liking.
First of all, while Docker is generally intended to run a single process, for testing I've found it works better for the Docker container to run all the services needed. There is some duplication in going this route, but you don't have to worry about shared services like Mongo or PostgreSQL. This can be accomplished by using something like Supervisor: http://docs.docker.com/articles/using_supervisord/
The idea is to configure supervisor to start all necessary services inside the container, so they are completely isolated from other containers. In my environment, I have mongo, xvfb, chrome and firefox all running in a single container. So really, you are still running a single process (supervisor), but it starts many others.
As for adding repositories to your container, I just have the host machine check out the code, and then when I run docker I use the -v flag to mount the repo into the container. This way you don't need to rebuild the container each time. I build the containers nightly with the latest code so that all necessary gems are already installed, for a faster 'gem install' at testing time.
Lastly, I have a script as the entrypoint of the container that allows me to pass in which test I want to run.
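Expressed as a docker-compose fragment (my own rough sketch, not the setup above; the image name, mount path and entrypoint script are assumptions):

services:
  acceptance:
    image: acceptance-tests:nightly        # hypothetical image built nightly with gems pre-installed
    volumes:
      - ./:/repo                           # bind-mount the checked-out repo instead of rebuilding
    entrypoint: ["/repo/run-tests.sh"]     # hypothetical entrypoint script
    command: ["${TEST_SUITE:-smoke}"]      # which test suite to run, passed as an argument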
Jenkins then just runs the docker commands and passes in the tests to run. These can be done in parallel, sequentially or any other way you like. I'm currently looking into having these tests run on slave Jenkins instances in an auto-scaling group in AWS.
Hope that helps.
Drone is a Docker-based open-source CI plus an online service: https://drone.io
Generally it runs build and test in Docker containers and removes all containers after the build. You just need to provide a file named .drone.yml, with configuration similar to .travis.yml, to configure your build.
It will manage your services, like databases and caches, as linked containers.
For your build environment, you can use existing Docker images as templates for dependencies.
So far, it supports github.com and GitLab. For your own CI system, you can use the Drone CLI only, or its web interface.
I recommend using the Jenkins Docker plugin. Though it is new, it is starting to expose the power of Docker used inside Jenkins, and the configuration is well documented there. (Let me know if you have problems.)
The strategy I planned to use:
Create different app images to serve the different services like Postgres, MongoDB, Redis and such; since these are rarely updated, they are configured globally as a "cloud" template in advance, and each VM has a label to indicate its service.
In each Jenkins job, the appropriate image is selected as the slave node (using that label as the name).
When the job is triggered, it will automatically start the Docker container as a slave in seconds.
It should work for you.
BTW: at the time I answered (2014.5), the plugin was not mature enough, but it is the right direction.