Is it possible to set up a config which would include:
GitLab project #1 java-container
GitLab project #2 java-container
Nginx container
Redis container
Cassandra container
Nginx exporter (Prometheus)
Redis exporter (Prometheus)
JMX exporter (Prometheus) x2
It's important to have all of this in one multi-container pod on Kubernetes (GKE), communicating via a shared volume and localhost.
I've already done all this in Kubernetes with init containers (to pull the code and compile it), and now I'm looking for a way to make this work with CI/CD.
So, if this can be done with GitLab CI, could you please point me to the right documentation or manual pages? I'm a newbie with GitLab CI and have already lost myself in dozens of articles from the internet.
Thanks in advance.
The first thing is to join all projects that should be built with Maven and/or Docker into one common project on GitLab.
Next, add Dockerfiles and all the files needed for the docker build into the sub-projects' folders.
Next, in the root of the common project, place the .gitlab-ci.yml and deployment.yml files.
deployment.yml should be common for all the sub-projects.
.gitlab-ci.yml should contain all the stages needed to build every sub-project. Since we don't need to rebuild everything every time we change some files, we use git tags to tell GitLab CI when it should run one stage or another. This can be implemented with the only parameter:
docker-build-akka:
  stage: package
  only:
    - /^akka-.*$/
  script:
    - export DOCKER_HOST="tcp://localhost:2375"
    ...
And so on for every stage. So, if you make changes to the relevant Dockerfile or Java code, commit and push to GitLab with a tag like akka-0.1.4, and the GitLab CI runner will run only the appropriate stages.
Also, if you only change the README.md file or anything else that doesn't require building the project, no build will be triggered.
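For illustration, here is a minimal sketch of how two such tag-gated jobs could sit side by side; the build commands, image names and directory layout are assumptions, not a definitive setup:

stages:
  - package

docker-build-akka:
  stage: package
  only:
    - /^akka-.*$/
  script:
    - export DOCKER_HOST="tcp://localhost:2375"
    # hypothetical build and push commands for this sub-project
    - docker build -t $CI_REGISTRY_IMAGE/akka:$CI_COMMIT_TAG akka/
    - docker push $CI_REGISTRY_IMAGE/akka:$CI_COMMIT_TAG

docker-build-nginx:
  stage: package
  only:
    - /^nginx-.*$/
  script:
    - export DOCKER_HOST="tcp://localhost:2375"
    - docker build -t $CI_REGISTRY_IMAGE/nginx:$CI_COMMIT_TAG nginx/
    - docker push $CI_REGISTRY_IMAGE/nginx:$CI_COMMIT_TAG

A push with an akka-* tag then runs only the first job, and a push with an nginx-* tag only the second.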
Lots of useful stuff you can find here and here.
Also, look at the problem I faced running the docker build stage in Kubernetes. It might be helpful.
I have a workflow where
1) Team A & Team B would push changes in an app to a private gitlab (running in docker container) on Server A.
2) Their app should contain Dockerfile or docker-compose.yml
3) GitLab should trigger a Jenkins build (Jenkins runs in a Docker container which is also on Server A) and execute the usual build things like tests
4) Jenkins should build a new docker image and deploy it.
Question:
If Team A needs packages like Maven and npm for web applications, but Team B needs other packages like C++ etc., how do I solve this?
Because I don't think it is correct for my Jenkins container to have all these packages (mvn, npm, yarn, C++ etc.) and then execute the Jenkins build.
I was thinking that Team A should get a container with packages it needs installed. Similarly for Team B.
I want to make use of Docker, Kubernetes, Jenkins and Gitlab. (As much container technology as possible)
Please advise me on the workflow. Thank you
I would like to share a developer's perspective, which differs from the "operations-centric" state of mind presented in the question.
For developers, Jenkins is also a tool that can trigger the build of the application. Yes, of course, a built artifact can be a Docker image as well, but this is not what developers are really concerned about. You've referred to this as "the usual build things like test", but developers have entire ecosystems around this "usual build".
For example, mvn, which you've mentioned, has great dependency resolution capabilities and can resolve the dependencies on its own. Roughly the same holds for other build tools. I'll stick with Maven for this answer.
So you don't need to maintain dependencies yourself, but as a Jenkins maintainer you should provide the technical ability to actually build the product: running Maven, which in turn resolves/downloads all the dependencies, runs the tests, produces test results, and can even create a Docker image or deploy that image to some image repository if you wish ;)
So developers who use some build technology maintain their own build scripts (declarative, as in the case of Maven, or something like Makefiles in the case of C++), and they should be able to run their own tools.
Now this picture doesn't contradict containerization:
The Jenkins image can contain maven/make/npm, really just a small number of tools needed to run the build. The actual build scripts can be part of the application's source code base (maintained in git).
So when Jenkins gets the notification about the build, it should check out the source code, run some script (like mvn package), show the test results, and then, as a separate step or from Maven itself, create an image of your application and upload it to some repository or supply it to the Kubernetes cluster, depending on your actual DevOps needs.
Note that during mvn package, Maven will download all the dependencies (3rd-party packages) into the Jenkins workspace and compile everything with the Java compiler, which you obviously also need to make available on the Jenkins machine.
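A rough sketch of that flow as a declarative Jenkinsfile; the stage names, Maven goals and image name here are assumptions, not a definitive setup:

pipeline {
    agent any
    stages {
        stage('checkout') {
            steps {
                checkout scm
            }
        }
        stage('build and test') {
            steps {
                // Maven resolves dependencies, compiles and runs the tests
                sh 'mvn -B package'
                // publish the test results to Jenkins
                junit 'target/surefire-reports/*.xml'
            }
        }
        stage('docker image') {
            steps {
                script {
                    // hypothetical image name; requires the Docker Pipeline plugin
                    docker.build('my-company/my-app:latest')
                }
            }
        }
    }
}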
Whoa, that's a big one and there are many ways to solve this challenge.
Here are 2 approaches which you can apply:
Solve it by using different types of Jenkins Slaves
In the long run you should consider running Jenkins workloads on slaves.
This is not only desirable for your use case but also scales much better with higher workloads.
In the worst case, dense workloads can kill your Jenkins master.
See https://wiki.jenkins.io/display/JENKINS/Distributed+builds for reference.
When defining slaves in Jenkins (e.g. with the ec2 plugin for AWS integration) you can use different slaves for different workloads. That means you prepare a slave (or slave image, in AWS this would be an AMI) for each specific purpose you've got. One for Java, one for node, you name it.
These defined slaves can then be used within your jobs by checking the Restrict where this project can be run option and entering the label of your slave.
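In a declarative Jenkinsfile the equivalent is to pin the job to a slave label; a minimal sketch, assuming a slave labelled java:

pipeline {
    // only run on slaves that carry the 'java' label
    agent { label 'java' }
    stages {
        stage('build') {
            steps {
                sh 'mvn -B package'
            }
        }
    }
}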
Solve it by doing everything within the Docker Environment
The simplest solution in your case would be to just use the Docker infrastructure to build Docker images of any kind, which you then push to your private Docker Registry (all big cloud providers have such Container Registry services) and download at deploy time. This saves you the pain of installing new technology stacks every time you change them.
I don't really know how familiar you are with Docker or other containerization technologies, so I'll roughly outline the solution for you.
Create a Dockerfile with the desired base image as a starting point. Just google them or look for them on Docker Hub. Here's a sample for Node.js:
# base image; pick the Node version you need
FROM node:lts-alpine
MAINTAINER devops#yourcompany.biz
ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --silent
# I recommend using a .dockerignore file to not copy unnecessary stuff into your image
COPY . .
CMD ["npm", "start"]
Adapt the image and create a Jenkinsfile for your CI/CD pipeline. There you should build and push your docker image like this:
stage('build docker image') {
    ...
    steps {
        script {
            docker.build('your/registry/your-image', '--build-arg NODE_ENV=production .')
            docker.withRegistry('https://yourregistryurl.com', 'credentials token to access registry') {
                docker.image('your/registry/your-image').push(env.BRANCH_NAME + '-latest')
            }
        }
    }
}
Do your deployment (in my case it's done via Ansible) by pulling and running the previously pushed image in your deployment scripts.
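A stripped-down version of such a deployment step, assuming plain Docker on the target host and reusing the names from the snippet above (the container name and port mapping are placeholders):

# pull the image that the pipeline pushed, then replace the running container
docker pull yourregistryurl.com/your/registry/your-image:master-latest
docker stop your-app || true
docker rm your-app || true
docker run -d --name your-app -p 80:3000 yourregistryurl.com/your/registry/your-image:master-latest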
If you'd try to define your questions a little more in detail, you'd get better answers.
I use CircleCI to build a Go binary that I want to run in a pod installed by Helm charts. I want to move the binary from CircleCI to the remote cluster so it's available when the pod starts. I know it's possible with volumes, like a ConfigMap or Secret, but I'm not sure what the best way to do this is.
I once made it work with a private Docker registry and a Kubernetes Secret for the registry credentials, but I don't like this option. I don't want to have to build and push a new Docker image on every binary change.
version: 2.1
jobs:
  build_and_deploy:
    docker:
      - image: circleci/golang:1.12.7
    steps:
      - checkout
      - run: go get -v -t -d ./...
      - run: go build cmd/main.go
      - run: ...
      - run: helm install
workflows:
  version: 2
  build:
    jobs:
      - build_and_deploy:
The expected result should be a new binary available on the cluster every time the job runs.
According to best practices, the binary file should be added during the image build, as mentioned by the community above and in developer best practices:
Don’t create images from running containers – In other terms, don’t use “docker commit” to create an image. This method to create an image is not reproducible and should be completely avoided. Always use a Dockerfile or any other S2I (source-to-image) approach that is totally reproducible, and you can track changes to the Dockerfile if you store it in a source control repository (git).
However, from another point of view you can consider:
1. init containers to build your image directly on the cluster
2. kaniko with an external location for your build context (a GCS bucket or git repository); see the sketch after this list
3. a helm pre-install hook in order to use the above-mentioned solutions
4. finally, other solutions like Cloud Build or running Cloud Build locally
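To make option 2 more concrete, here is a rough sketch of a kaniko build as a Kubernetes Job; the bucket, image destination and secret name are assumptions:

apiVersion: batch/v1
kind: Job
metadata:
  name: kaniko-build
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: kaniko
          image: gcr.io/kaniko-project/executor:latest
          args:
            - "--context=gs://my-build-context-bucket/context.tar.gz"   # hypothetical GCS bucket
            - "--destination=gcr.io/my-project/my-binary-image:latest"  # hypothetical destination
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /secret/kaniko-secret.json
          volumeMounts:
            - name: kaniko-secret
              mountPath: /secret
      volumes:
        - name: kaniko-secret
          secret:
            secretName: kaniko-secret   # assumed to hold a GCP service-account key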
Please refer also to "Switching from CircleCI to Google Cloud Build".
As described in the article above you can use keel to automatically update your deployments when the image in the docker repository is updated.
Please let me know if it helps.
Your CI/CD should simply build a Docker image with that binary, then push it to the private registry. The cluster then pulls that image.
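If you go that route, the Dockerfile can stay tiny; a sketch, assuming the binary produced by go build cmd/main.go is statically linked and named main:

FROM alpine:3.10
# copy the binary built in the CI job into the image
COPY main /usr/local/bin/main
ENTRYPOINT ["/usr/local/bin/main"]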
I want to build and test multiple dockers from one repo in GitLab. This monorepo holds a few microservices that work together.
We have a working docker-compose up to aid local dev, so that's a start.
The goal is to move build + test to GitLab, run an E2E test (end-to-end) of these dockers, and let GitLab upgrade our staging environment.
Building multiple dockers is just multiple jobs in the build stage of the (one) pipeline; testing can happen per docker, I guess, which leaves testing the whole with multiple dockers running in a (or the?) staging environment.
How can GitLab run E2E tests on multiple dockers? (or is this unwise to begin with)
Would we need Kubernetes for the inter-docker mapping (network, volumes, but also dependencies) that docker-compose now facilitates?
We are using a self-hosted GitLab CE instance.
Update: cut shorter, use proper terminology.
I haven't worked with PHP, and have never done a Docker build for multiple modules, although I tried out a quick multi-module example for Node.js. Check this repo.
Refer to its .gitlab-ci.yml, which builds two independent hello-world-style Node.js modules.
I have now adapted this to build Docker images separately.
Original answer (question changed 12th July)
You haven't mentioned what documentation you are referring to. You should be able to configure this using .gitlab-ci.yml.
AFAIK, building a Docker image should be programming-language agnostic. If you can run docker build . locally, the following documentation should help:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
You can use a GitLab pipeline for testing <-> building images. Refer to https://docs.gitlab.com/ee/ci/pipelines.html
Refer to this for PHP build - https://docs.gitlab.com/ee/ci/examples/php.html
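As a starting point, a bare-bones .gitlab-ci.yml using docker-in-docker might look like this (the image name is a placeholder):

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t my-image:$CI_COMMIT_SHORT_SHA .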
Answering my own question, 2 years of experience later.
For a normal (single-project/service) repo the CI is straightforward.
For a monorepo, an end-to-end test is straightforward.
You can combine these aspects by having a repo with submodules; it's essentially both. But this can cause overhead if done continuously, and it is mostly useful for proofing beta/alpha and release candidates.
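One hedged sketch of the monorepo end-to-end job, reusing the existing docker-compose setup (the e2e-tests service and image choice are assumptions):

e2e:
  stage: test
  image:
    name: docker/compose:latest
    entrypoint: [""]
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
  script:
    - docker-compose up -d --build
    - docker-compose run --rm e2e-tests    # hypothetical service that runs the E2E suite
    - docker-compose down --volumes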
If you know more options, feel free to add here.
I have multiple projects I need to build as part of the same CI flow - some are in java, some are nodejs, some are c++ etc.
We use Jenkins and slaves are supposed to run as docker containers.
My question is: should I create a Jenkins slave container image per module type, i.e. a dedicated slave image that can build Java, a dedicated container with Node installed to build Node.js, etc., or a single container that can build anything - Java, Node, etc.?
If I look at it from a VM perspective, I would most likely use the same VM to build anything, which means a centralized build slave. But I don't like this dependency; and if tomorrow I need to update the Java version while keeping the old one, I might end up with huge images with little difference between them.
WDYT?
I personally would go down the route of a container-per-module-type because of the following:
I like to keep my containers as focussed as possible. They should do one thing and do it well e.g. build Java applications, build Node applications
Docker makes it incredibly easy to build Container images
It is incredibly easy to stop and start Containers
I'd probably create myself a separate project in Git that was structured something like this:
- /slaves
- /slaves/java
- /slaves/java/Dockerfile
- /slaves/node
- /slaves/node/Dockerfile
...
Each "module type" has one Dockerfile that creates and builds the container image of the slave for that type. I would make changes to this project via pull requests and, each time a pull request is merged into master, push the resulting images up to DockerHub as the new version to be used for my Jenkins slaves.
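For illustration, the Java slave's Dockerfile could be as small as this (the base image and package choice are assumptions):

FROM jenkins/inbound-agent:latest
USER root
# install only what this slave type needs for its builds
RUN apt-get update \
 && apt-get install -y --no-install-recommends maven \
 && rm -rf /var/lib/apt/lists/*
USER jenkins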
I would have the above handled by another project running in my Jenkins instance that simply monitored my Git repository. When changes are made to the Git repository it just runs the build commands in order and then does a push of the new images to DockerHub:
docker build -f slaves/java/Dockerfile -t my-company/java-slave:$BUILD_NUMBER -t my-company/java-slave:latest
docker build -f slaves/node/Dockerfile -t my-company/node-slave:$BUILD_NUMBER -t my-company/node-slave:latest
docker push my-company/java-slave:$BUILD_NUMBER
docker push my-company/java-slave:latest
docker push my-company/node-slave:$BUILD_NUMBER
docker push my-company/node-slave:latest
You can then update your Jenkins configuration to the new image for the slaves when you're ready.
We want to give setting up CI/CD with Jenkins a try for our project. The project itself has Elasticsearch and PostgreSQL as runtime dependencies and Webdriver for acceptance testing.
In dev environment, everything is set up within one docker-compose.yml file and we have acceptance.sh script to run acceptance tests.
After digging through the documentation I found that it's potentially possible to build CI with the following steps:
dockerize project
pull project from git repo
somehow pull docker-compose.yml and project Dockerfile - either:
put it in the project repo
put it in separate repo (this is how it's done now)
put it somewhere on a server and just copy it over
execute docker-compose up
the project's Dockerfile will have an ONBUILD section to run tests. Unit tests are run through mix test and acceptance tests through scripts/acceptance.sh. It would be cool to run them in parallel.
shut down docker-compose, clean up containers
Because this is my first experience with Jenkins, a series of questions arise:
Is this a viable strategy?
How to connect tests output with Jenkins?
How to run and shut down docker-compose?
Do we need/want to write a pipeline for that? Will we need/want a pipeline when we get to CD in the next stage?
Thanks
Is this a viable strategy?
Yes it is. I think it would be better to include the docker-compose.yml and Dockerfile in the project repo. That way any changes are tied to the version of code that uses them. If it's in an external repo it becomes a lot harder to change (unless you pin the git sha somehow, like using a submodule).
project's Dockerfile will have ONBUILD section to run tests
I would avoid this. Just set a different command to run the tests in a container, not at build time.
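For example, run the test suites as separate commands against the built images, with app as a placeholder for your service name:

# unit tests and acceptance tests as separate docker-compose commands
docker-compose run --rm app mix test
docker-compose run --rm app ./scripts/acceptance.sh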
How to connect tests output with Jenkins?
Jenkins just uses the exit status from the build steps, so as long as the test script exits with a non-zero code on failure and a zero code on success, that's all you need. Test output that is printed to stdout/stderr will be visible in the Jenkins console.
How to run and shut down docker-compose?
I would recommend this to run Compose:
docker-compose pull # if you use images from the hub, pull the latest version
docker-compose up --build -d
In a post-build step to shutdown:
docker-compose down --volumes
Do we need/want to write a pipeline for that?
No, I think just a single job is fine. Get it working with a simple setup first, and then you can figure out what you need to split into different jobs.