As far as I understand, the main difference is that GitLab CI is open source (you can install it on your own server) and Travis CI isn't.
So the latter is always cloud/service-based. And it's free for open-source projects.
But then GitLab.com (the company, not the software) also has a cloud version that you don't need to install: ci.gitlab.com. I'm guessing this version can only be used with public repositories hosted in your GitLab account.
But there's almost no documentation out there about running GitLab CI this way. Most of the docs I find are about installing the GitLab CI server or the runners. How are ci.gitlab.com's runners configured? What OS do they run? Can I have Windows/Mac runners? (The software apparently supports these OSs, but it's not specified which runners ci.gitlab.com's service supplies.)
Edit: 29/06/2016
As the comments suggest, GitLab now offers what they call shared runners. This means you no longer need to bring your own runner; you can use theirs instead and use GitLab CI much like Travis CI, but the free tier is limited to 2,000 minutes of CI run time per month.
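For illustration, a minimal .gitlab-ci.yml that runs on the shared runners could look like the sketch below; the image and the commands are placeholders for whatever your project actually needs:

```yaml
# Minimal sketch of a .gitlab-ci.yml using GitLab's shared runners.
# The image and the commands are placeholders for your own project.
stages:
  - test

test:
  stage: test
  image: python:3.11        # any public Docker image your tests need
  script:
    - pip install -r requirements.txt
    - pytest
```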
**Previous (historic) answer**
GitLab CI can be used online, but you must bring your own runners. What does this mean? You need to install a piece of software on your own servers, and that software will run the tests. It's more complex than Travis.
After installing it, you have to associate it with your project and configure whether you want to run the tests inside Docker or on your bare hardware. There are a few more options.
Each time you push a commit to GitLab, a hook is triggered in GitLab CI and a build is sent to an available runner, which executes the build and tests and sends the results back to the GitLab CI server.
Now, with the latest update, GitLab CI is built into GitLab itself, but it still works the same way.
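If you bring your own runner, you typically give it one or more tags when registering it and then route jobs to it from .gitlab-ci.yml. A small sketch, where the tag name my-docker-runner is just an example:

```yaml
# Sketch: the job only runs on runners registered with a matching tag.
build:
  tags:
    - my-docker-runner      # example tag assigned when registering your own runner
  script:
    - make test
```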
I just started to learn CI/CD with Jenkins.
Currently, I am very confused about the CI process, and I'm not sure whether my understanding is correct.
Below is my understanding:
Coding locally -> push the changes to GitHub with git -> pull the code from GitHub and build the project with Jenkins
My other question is: do I need to click Build Now in Jenkins manually every time, or will it build again automatically after I make a change?
For the first question: you need to integrate Jenkins with a build tool like Ant in the CI/CD pipeline so that it builds the code after pulling it from GitHub.
For the second question: go through the link on automating the build; it may be helpful.
Your understanding is correct. You can also have a unit testing stage before the build stage.
To build a project with Jenkins, you do not have to run the pipeline manually every time. You can use webhooks in GitHub to automatically trigger the Jenkins pipeline on every commit or push, or in various other scenarios.
Here is a guide to help you understand more:
https://docs.github.com/en/developers/webhooks-and-events/webhooks/about-webhooks
A simple tutorial on webhooks: https://www.blazemeter.com/blog/how-to-integrate-your-github-repository-to-your-jenkins-project
Some other ways:
You can also use the options available in the Build Triggers section of your pipeline settings to trigger the pipeline automatically (a Jenkinsfile sketch follows the list of options below).
Options:
Trigger builds remotely (e.g., from scripts)
Build after other projects are built
Build periodically
Build when a change is pushed to GitLab. GitLab webhook URL: https://<github_url> (Webhook)
GitHub hook trigger for GITScm polling
Poll SCM
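To make the automatic-trigger part concrete, here is a sketch of a declarative Jenkinsfile. The repository URL, branch and build command are placeholders; pollSCM is shown because it works without extra setup, and with the GitHub plugin plus a webhook you can use the githubPush() trigger instead so builds start immediately on push:

```groovy
// Sketch of a declarative Jenkinsfile; URL, branch and build command are placeholders.
pipeline {
    agent any

    triggers {
        // Check GitHub for changes every 5 minutes ("Poll SCM" option).
        // With the GitHub plugin and a webhook configured, githubPush() can be
        // used instead ("GitHub hook trigger for GITScm polling").
        pollSCM('H/5 * * * *')
    }

    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/your-user/your-repo.git', branch: 'main'
            }
        }
        stage('Build') {
            steps {
                sh './gradlew build'   // replace with your build tool (Ant, Maven, npm, ...)
            }
        }
    }
}
```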
I am teaching myself how to use Jenkins and Docker for CI/CD and deployment to DigitalOcean. I am stuck at some steps, and I am especially interested in the best practices of CI/CD.
The process/pipeline I currently have:
1. Local code with a Flask web app and a docker-compose.yml (incl. Dockerfile)
2. Push the code to GitHub
3. CI: local Jenkins (will be moved to the host later) tests the code
4. If the tests pass, I log in to the droplet, clone the repo, stop the running Docker container, and run docker-compose up
5. My app is live again
I would like to automate step 4, and I have two plans for how to do it (advice on best practice is appreciated!).
Plan 1:
Write a step in the Jenkins pipeline that will:
4.1 start a new droplet automatically
4.2 log into it with SSH
4.3 pull the code from GitHub
4.4 start everything with docker-compose
4.5 reroute the floating IP to the new droplet
Plan 2:
Write a step in the Jenkins pipeline that will:
4.1 build the code and somehow push an image to "somewhere"
4.2 start a new droplet
4.3 log into the droplet with SSH
4.4 pull the image from "somewhere"
4.5 start everything with docker-compose
4.6 reroute the floating IP to the new droplet
I'd like to hear your opinions on the steps:
1. Which plan is better?
2. What could I do better?
3. What are the best practices that I could use?
4. Where can I push an image so that I can pull it in a new droplet?
EDIT:
I'd like to hear your answers on following:
1. Which plan is better?
2. Why is Kubernetes better than docker-compose in a production environment?
I recommend using Kubernetes instead of docker-compose in production environments. If not Kubernetes, and you really want Docker only, then at least make it Docker Swarm.
docker-compose is not reliable for production, first of all because it is only for a single node. If you want to scale up, you will surely have downtime because you will be relying on vertical scaling (increasing your node's resources).
Kubernetes and Docker Swarm are orchestration tools, meaning you can add more servers to scale the app, a.k.a. horizontal scaling. Orchestration allows your containers to be assigned to other droplets, and they can freely communicate with containers on different droplets. docker-compose alone cannot do that. I recommend Docker Swarm for beginners, as Kubernetes is very complicated.
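To illustrate the difference, the same compose file can be deployed to a Swarm with several replicas spread over the nodes. This is only a sketch; the service name and image are placeholders:

```yaml
# Sketch of a compose file for `docker stack deploy` on a Docker Swarm.
# Service name and image are placeholders.
version: "3.8"
services:
  web:
    image: your-dockerhub-user/flask-app:latest
    ports:
      - "80:5000"
    deploy:
      replicas: 3               # containers are spread across the Swarm nodes
      restart_policy:
        condition: on-failure
```

You would deploy it with docker stack deploy -c docker-compose.yml myapp on a Swarm manager; note that plain docker-compose up ignores the deploy: section.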
Normally you should just set up your infrastructure in the cloud; your CI/CD in Jenkins then does the continuous integration, continuously building images and, at the least, doing an automated deployment to your server.
What I am talking about here is: when you merge your code into a particular branch (e.g. master) of your source code repository, like GitHub or Bitbucket, an automatic Jenkins build runs and executes your CI/CD. So basically, every time master is updated, the image inside the droplet is updated as well and thus runs the latest source code.
In your case, where you are using DigitalOcean, you can create an API endpoint on your droplet that accepts webhooks to trigger the automated deployment.
This is the approach I can think of when using DigitalOcean. DigitalOcean is very cheap, but things are done manually, unlike GCP and AWS, where there are more ways to automate deployment than building your own webhook API. Regarding your last statement ("if I can use Jenkins to clone the code, run the container in the new droplet and reroute with the floating IP"): I think it is too much and too slow. That process alone may take maybe 10 minutes, whereas our deployment automation with Kubernetes takes only about 30 seconds, and our whole CI/CD takes 2 minutes.
On your fourth question: Docker Hub should be fine as your image repository.
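A rough sketch of what such a Jenkins pipeline could look like, assuming the Docker Pipeline and SSH Agent plugins are installed; the image name, the credential IDs (dockerhub-creds, droplet-ssh-key) and the droplet address are all placeholders:

```groovy
// Sketch only: image name, credential IDs and host are placeholders, not real values.
pipeline {
    agent any
    stages {
        stage('Build and push image') {
            steps {
                script {
                    // Build the image and push it to Docker Hub under two tags.
                    def appImage = docker.build("your-dockerhub-user/flask-app:${env.BUILD_NUMBER}")
                    docker.withRegistry('https://registry.hub.docker.com', 'dockerhub-creds') {
                        appImage.push()
                        appImage.push('latest')
                    }
                }
            }
        }
        stage('Deploy to droplet') {
            steps {
                sshagent(['droplet-ssh-key']) {
                    // Pull the new image on the droplet and restart the stack.
                    sh 'ssh -o StrictHostKeyChecking=no deploy@your-droplet-ip "docker compose pull && docker compose up -d"'
                }
            }
        }
    }
}
```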
I have an assignment to continuously integrate, deliver and deploy a Spring Boot application with Angular using GitLab CI, Docker, Kubernetes, Jenkins and SonarQube. The assignment title is the same as this question's, using the technologies described. I've already searched the web and learned about these technologies. My question is: how and where do I start, and which steps should I define to complete the assignment? Any help would be much appreciated.
Make a repo in GitLab with branches test and prod.
Set up a Docker image build pipeline (for both branches) that will build/test the code and package it into a Docker image using a multi-stage build (GitLab CI).
Configure a webhook that triggers a deployment to the test environment (either in Jenkins or GitLab CI).
Configure a downstream job that can be run manually to deploy to production (in Jenkins or GitLab CI).
For both deployment steps above, you will need declarative deployment manifests for Kubernetes.
The above are just the basics; there are many other tools that can be used for Kubernetes deployments.
The usual approach is to commit code to testing/dev, build/test the Docker image, and trigger the test deployment as soon as the image arrives in the registry. If everything goes well, you then port the change to the prod branch, which triggers the pipeline again, building/testing the prod image followed by deployment.
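A minimal .gitlab-ci.yml sketch of that flow, assuming Docker-in-Docker is allowed on your runner and that cluster credentials are available to the job (for example via a KUBECONFIG CI variable); image names and manifest paths are placeholders:

```yaml
# Sketch: build and push an image, deploy test automatically, deploy prod manually.
stages:
  - build
  - deploy

build-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

deploy-test:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    # Assumes cluster credentials are provided to the job (e.g. a KUBECONFIG variable).
    - kubectl apply -f k8s/          # declarative Kubernetes manifests kept in the repo
  only:
    - test

deploy-prod:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - kubectl apply -f k8s/
  when: manual                        # run manually to deploy to production
  only:
    - prod
```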
I want to build and test multiple Docker images from one repo in GitLab. This monorepo holds a few microservices that work together.
We have a working docker-compose setup to aid local dev, so that's a start.
The goal is to move build + test to GitLab, run an E2E (end-to-end) test of these containers, and let GitLab upgrade our staging environment.
Building the multiple images is just multiple jobs in the build stage of the (one) pipeline; testing can happen per image, I guess, which leaves testing the whole, with multiple containers running, in a (or the?) staging environment.
How can GitLab run E2E tests against multiple containers? (Or is this unwise to begin with?)
Would we need Kubernetes for the inter-container wiring (network, volumes, but also dependencies) that docker-compose currently facilitates?
We are using a self-hosted GitLab CE instance.
Update: cut shorter, use proper terminology.
I haven't worked with PHP, and I have never done a Docker build for a multi-module setup, although I tried out a quick multi-module example for Node.js. Check this repo.
Refer to the .gitlab-ci.yml, which builds two independent hello-world-style Node.js modules.
I have now adapted this to build the Docker images separately.
Original answer (question changed 12th July)
You haven't mentioned what documentation you are referring to. You should be able to configure this using .gitlab-ci.yml.
AFAIK, building a Docker image should be programming-language agnostic. If you can run docker build . locally, the following documentation should help:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
You can use a GitLab pipeline for testing and building images. Refer to https://docs.gitlab.com/ee/ci/pipelines.html
Refer to this for a PHP build: https://docs.gitlab.com/ee/ci/examples/php.html
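For the monorepo case described in the question, here is a sketch of a .gitlab-ci.yml with one build job per service and an E2E job that brings the whole stack up with docker-compose inside a Docker-in-Docker job. The service names (api, frontend, e2e-tests) are placeholders, and the runner must allow DinD:

```yaml
# Sketch: one build job per service plus an end-to-end job reusing the compose file.
stages:
  - build
  - e2e

.build-service:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE/$SERVICE:$CI_COMMIT_SHORT_SHA $SERVICE/
    - docker push $CI_REGISTRY_IMAGE/$SERVICE:$CI_COMMIT_SHORT_SHA

build-api:
  extends: .build-service
  variables:
    SERVICE: api

build-frontend:
  extends: .build-service
  variables:
    SERVICE: frontend

e2e:
  stage: e2e
  image: docker:latest
  services:
    - docker:dind
  script:
    # Reuse the compose file from local dev; e2e-tests is a hypothetical test-runner
    # service. Assumes the image ships the compose plugin; otherwise install it first.
    - docker compose up -d
    - docker compose run e2e-tests
    - docker compose down
```

Whether this replaces Kubernetes depends on your staging setup; the compose-in-DinD approach only covers running the stack for the duration of the test job.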
Answering my own question, two years of experience later.
For a normal (single-project/service) repo the CI is straightforward.
For a monorepo, an end-to-end test is straightforward.
You can combine these aspects by having a repo with submodules; it's essentially both. But this can cause overhead if done continuously, and it is mostly useful for proofing beta/alpha and release candidates.
If you know more options, feel free to add here.
I'm a bit of a newbie with Jenkins, so I have this question.
I am currently working on a project about CD. We are using Jenkins to build a Docker image, push it to the registry and afterwards deploy it to OpenShift... Although this process works like a charm, there is a tricky problem I'd like to solve. There is not only one OpenShift but three (and increasing) environments/regions where I want to deploy these images.
This is how we are currently doing it:
Setting region tokens as secret text
$region_1 token1
$region_2 token2
$region_3 token3
Then
build $docker_image
push $docker_image to registry
deploy into Region1.ip.to.openshift:port -token $region_1
deploy into Region2.ip.to.openshift:port -token $region_2
deploy into Region3.ip.to.openshift:port -token $region_3
Thus, whenever we need to add a new "region" to the Jenkins jobs, we have to edit every job manually...
Since the number of Docker images and the number of OpenShift regions/environments keeps increasing, we are looking for a way to automate this, or at least make it as easy as possible to add a new OpenShift region, since ALL the jobs (old and new) must deploy their images to the new environments/regions...
I have been reading the documentation for a while, but Jenkins is so powerful and has so many features/options that I somehow get lost reading all the docs...
I don't know if a Pipeline job or something similar would help...
Any help is welcome :)
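One common way to avoid editing every job by hand is to keep the region list in a single place (a Jenkinsfile, or a shared library that all jobs load) and loop over it, so adding a region means adding one entry. A sketch, where the credential IDs, image name and the oc commands are examples to adapt to whatever you already run per region:

```groovy
// Sketch: region list, credential IDs and image name are placeholders.
def regions = [
    [name: 'region1', server: 'https://Region1.ip.to.openshift:port', tokenId: 'region_1'],
    [name: 'region2', server: 'https://Region2.ip.to.openshift:port', tokenId: 'region_2'],
    [name: 'region3', server: 'https://Region3.ip.to.openshift:port', tokenId: 'region_3'],
]

pipeline {
    agent any
    environment {
        DOCKER_IMAGE = 'registry.example.com/myapp:latest'   // placeholder image
    }
    stages {
        stage('Build and push') {
            steps {
                sh 'docker build -t $DOCKER_IMAGE . && docker push $DOCKER_IMAGE'
            }
        }
        stage('Deploy to all regions') {
            steps {
                script {
                    regions.each { region ->
                        withCredentials([string(credentialsId: region.tokenId, variable: 'OC_TOKEN')]) {
                            // Escaped $ so the shell, not Groovy, expands the secrets.
                            sh """
                              oc login --token=\$OC_TOKEN --server=${region.server}
                              oc set image deployment/myapp myapp=\$DOCKER_IMAGE
                            """
                        }
                    }
                }
            }
        }
    }
}
```

If the deploy command differs per application, the same loop can live in a Jenkins shared library, so each job only declares its image name and adding a region stays a one-line change.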