Rancher and Jenkins configuration? [closed] - jenkins

Problem context
I have worked with Jenkins before: I installed the Jenkins WAR on a Linux machine and created jobs that interact with Git and deploy code.
A Rancher cluster is configured with three nodes: rancher, master, and worker.
Where to start?
Now, to set up Jenkins, do I need to install the Jenkins WAR separately (java -jar jenkins.war) on the same or another VM and use Jenkins jobs to deploy containers on the Rancher cluster?
Is there an easier way?
Please help.

I finally found a solution. My company has an embedded Jenkins/Git configuration, and from a Jenkins pipeline I deploy the app to the Kubernetes cluster using intermediate binary storage and a company-owned Docker registry.
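For orientation, a minimal sketch of what such a pipeline can look like, assuming a declarative Jenkinsfile, a company-owned registry, and kubectl access to the cluster; the registry URL, image name, and credential IDs below are placeholders, not part of the original setup:

    pipeline {
        agent any
        environment {
            // Placeholder for a company-owned registry; replace with your own.
            REGISTRY = 'registry.example.com'
            IMAGE    = "${REGISTRY}/myteam/myapp:${env.BUILD_NUMBER}"
        }
        stages {
            stage('Build image') {
                steps {
                    sh 'docker build -t $IMAGE .'
                }
            }
            stage('Push image') {
                steps {
                    // Assumes a username/password credential named 'registry-creds' exists in Jenkins.
                    withCredentials([usernamePassword(credentialsId: 'registry-creds',
                                                      usernameVariable: 'REG_USER',
                                                      passwordVariable: 'REG_PASS')]) {
                        sh 'echo "$REG_PASS" | docker login $REGISTRY -u "$REG_USER" --password-stdin'
                        sh 'docker push $IMAGE'
                    }
                }
            }
            stage('Deploy to cluster') {
                steps {
                    // Assumes kubectl is installed on the agent and a kubeconfig file credential exists.
                    withCredentials([file(credentialsId: 'kubeconfig', variable: 'KUBECONFIG')]) {
                        sh 'kubectl set image deployment/myapp myapp=$IMAGE'
                    }
                }
            }
        }
    }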

There isn't really an easiest way for this; you're going to have to brute-force it.

Related

Automate Builds on Jenkins [closed]

I tried to follow a YouTube tutorial on how to automate builds with Jenkins as a DevOps tool. I know how to automate builds with Docker containers and Kubernetes, but not with Jenkins!
I installed Jenkins and some plugins; I can create bash scripts for the deployment and host the application on GitHub. Can you walk me through setting up the job, connecting to my repository, and configuring environments?
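As a rough starting point only, a skeleton declarative pipeline for this kind of job might look like the following; the repository URL and the script names are hypothetical placeholders to replace with your own:

    pipeline {
        agent any
        stages {
            stage('Checkout') {
                steps {
                    // Placeholder repository URL and branch.
                    git url: 'https://github.com/your-user/your-app.git', branch: 'main'
                }
            }
            stage('Build') {
                steps {
                    sh './build.sh'    // hypothetical script that builds the project
                }
            }
            stage('Deploy') {
                steps {
                    sh './deploy.sh'   // your existing bash deployment script
                }
            }
        }
    }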

CI/CD with Jenkins [closed]

I just started to learn CI/CD with Jenkins.
Currently, I am very confused about the CI process, and I'm not sure whether my understanding is correct.
Below is my understanding:
Code locally -> push the changes to GitHub with git -> pull the code from GitHub and build the project with Jenkins.
My other question is: do I need to click Build Now in Jenkins manually every time, or will it automatically build again after I make a change?
For the first question: you need to integrate Jenkins with build tools like Ant in the CI/CD pipeline to build the code after pulling it from GitHub.
For the second question: go through the automate the build link; it may be helpful.
Your understanding is correct. You can also have a unit-testing stage before building the project (a minimal Jenkinsfile sketch follows the links below).
To build a project using Jenkins, you do not have to run the pipeline manually every time. You can use webhooks in GitHub to automatically trigger the Jenkins pipeline on every commit or push, or in various other scenarios.
Here is the guide to help you understand more:
https://docs.github.com/en/developers/webhooks-and-events/webhooks/about-webhooks
A simple tutorial on webhooks: https://www.blazemeter.com/blog/how-to-integrate-your-github-repository-to-your-jenkins-project
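A minimal sketch of such a pipeline, assuming a Maven project checked out from GitHub (an Ant project would be analogous); the repository URL is a placeholder:

    pipeline {
        agent any
        stages {
            stage('Checkout') {
                steps {
                    git url: 'https://github.com/your-user/your-repo.git', branch: 'main'
                }
            }
            stage('Unit tests') {
                steps {
                    sh 'mvn test'                 // unit-testing stage before the build
                }
            }
            stage('Build') {
                steps {
                    sh 'mvn package -DskipTests'  // package the artifact once tests have passed
                }
            }
        }
    }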
Some other ways:
You can also make use of the options available in the Build Triggers section of your pipeline settings to build the Jenkins pipeline automatically (a Jenkinsfile sketch of these triggers follows the list below).
Options:
Trigger builds remotely (e.g., from scripts)
Build after other projects are built
Build periodically
Build when a change is pushed to GitLab. GitLab webhook URL: https://<github_url> (Webhook)
GitHub hook trigger for GITScm polling
Poll SCM
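In a declarative Jenkinsfile, the same kinds of triggers can be declared directly. A minimal sketch, assuming the GitHub plugin is installed and a webhook points at <your-jenkins-url>/github-webhook/; the build step is a placeholder:

    pipeline {
        agent any
        triggers {
            // Build when GitHub delivers a push webhook ("GitHub hook trigger for GITScm polling").
            githubPush()
            // Alternatively, poll the repository every 15 minutes:
            // pollSCM('H/15 * * * *')
            // ...or build on a fixed schedule, e.g. nightly:
            // cron('H 2 * * *')
        }
        stages {
            stage('Build') {
                steps {
                    sh 'mvn package'    // replace with your actual build step
                }
            }
        }
    }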

Best practices: how to use CI/CD to deploy a Flask webapp to DigitalOcean? [closed]

I am teaching myself how to use Jenkins and Docker for CI/CD and deployment to DigitalOcean. I am stuck at some steps, and I am especially interested in the best practices of CI/CD.
The process/pipeline I currently have:
Local code with Flask web app and docker-compose.yml (incl. dockerfile)
Push the code to GitHub
CI: local Jenkins (to be moved to the host later) tests the code
If the tests pass, I log in to the droplet, clone the repo, stop the running Docker container, and run docker-compose up
My app is live again
I would like to automate step 4, and I have two potential plans for how to do it (advice on best practice is appreciated!).
Plan 1:
1. Write a step in the Jenkins pipeline that will:
4.1 start a new droplet automatically
4.2 log in to it with SSH
4.3 pull the code from GitHub
4.4 start it with docker-compose
4.5 reroute traffic with a floating IP to the new droplet
Plan 2:
2. Write a step in the Jenkins pipeline that will:
4.1 build the code and push an image somehow to "somewhere"
4.2 start a new droplet
4.3 log in to the droplet with SSH
4.4 pull the image from "somewhere"
4.5 start it with docker-compose
4.6 reroute traffic with a floating IP to the new droplet
I'd like to hear your opinions on the steps:
1. Which plan is better?
2. What could I do better?
3. What are the best practices that I could use?
4. Where can I push an image so that I can pull it in a new droplet?
EDIT:
I'd like to hear your answers on the following:
1. Which plan is better?
2. Why is Kubernetes better than docker-compose in a production environment?
I recommend using Kubernetes instead of docker-compose in production environments. If not Kubernetes, and you really want Docker only, then at least use Docker Swarm.
docker-compose is not reliable for production because, first of all, it only works on a single node. If you want to scale up, you will surely have downtime, because you will be relying on vertical scaling (increasing your node's resources).
Kubernetes and Docker Swarm are orchestration tools, meaning you can add more servers to scale the app (a.k.a. horizontal scaling). Orchestration allows your containers to be scheduled onto other droplets, and they can communicate freely with containers on different droplets; docker-compose alone cannot do that. I recommend Docker Swarm for beginners, as Kubernetes is very complicated.
Normally you just set up your infrastructure in the cloud; your CI/CD then does the continuous integration in Jenkins, continuously building images and then doing an automated deployment to your server.
What I am talking about is this: when you merge your code into a particular branch (e.g. master) of your source code repository, such as GitHub or Bitbucket, an automatic Jenkins build runs and executes your CI/CD. So basically, every time master is updated, the image inside the droplet is also updated, so it always has the latest source code.
In your case, where you are using DigitalOcean, you can create an API on your droplet that accepts webhooks to trigger the automated deployment.
This is the approach I can think of for DigitalOcean. DigitalOcean is very cheap, but things are done manually, unlike with GCP and AWS, where there are more approaches to deployment automation than creating your own webhook API. Regarding your last statement, "if I can use Jenkins to clone the code, and run the container in the new droplet and reroute with IP floating": I think that is too much, and it is slow. That process alone may take around 10 minutes, whereas we do deployment automation with our Kubernetes in maybe 30 seconds, and our whole CI/CD takes only 2 minutes.
On your fourth question: Docker Hub should be fine for your image repository.
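To make that concrete, a hedged sketch of what an automated step 4 could look like in a declarative Jenkinsfile, assuming the image is pushed to Docker Hub and Jenkins has SSH access to the droplet; the image name, droplet address, paths, and credential IDs are all placeholders, and the sshagent step requires the SSH Agent plugin:

    pipeline {
        agent any
        environment {
            // Placeholder Docker Hub repository; tag each build with the build number.
            IMAGE = "yourdockerhubuser/flask-app:${env.BUILD_NUMBER}"
        }
        stages {
            stage('Build and push image') {
                steps {
                    withCredentials([usernamePassword(credentialsId: 'dockerhub-creds',
                                                      usernameVariable: 'DH_USER',
                                                      passwordVariable: 'DH_PASS')]) {
                        sh 'echo "$DH_PASS" | docker login -u "$DH_USER" --password-stdin'
                        sh 'docker build -t $IMAGE .'
                        sh 'docker push $IMAGE'
                    }
                }
            }
            stage('Deploy to droplet') {
                steps {
                    sshagent(credentials: ['droplet-ssh-key']) {
                        // Assumes docker-compose.yml on the droplet references the pushed image.
                        sh 'ssh -o StrictHostKeyChecking=no deploy@your-droplet-ip ' +
                           '"docker pull $IMAGE && cd /srv/app && docker-compose up -d"'
                    }
                }
            }
        }
    }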

Continuous integration, delivery and deployment of a Spring Boot application [closed]

I have an assignment to continuously integrate, deliver, and deploy a Spring Boot application with Angular using GitLab CI, Docker, Kubernetes, Jenkins, and SonarQube. The assignment is named as the question is titled, using the technologies described. I've already searched the web and learned about these technologies. My question is: how and where do I start, and which steps should I define to complete my assignment? Any help would be much appreciated.
Make a repo in GitLab with branches test and prod.
Set up a Docker image build pipeline (for both branches) that builds/tests the code and packages it into a Docker image using a multi-stage build (GitLab CI).
Configure a webhook that triggers a deployment to the test environment (either in Jenkins or GitLab CI).
Configure a downstream job that can be run manually to deploy to production (in Jenkins or GitLab CI).
For both of the deployment steps above you will need declarative deployment manifests for Kubernetes.
The above are just the basics; there are many other tools that can be used for Kubernetes deployments.
The usual approach is to commit code to testing/dev, then build/test the Docker image and trigger a test deployment as soon as the image arrives in the registry. If everything goes well, you then port the change to the prod branch, which triggers the pipeline again to build/test the prod image, followed by deployment (a sketch of such a pipeline is shown below).
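If the deployment jobs end up in Jenkins rather than GitLab CI, a minimal declarative sketch of that flow could look like the following; the registry URL, manifest paths, namespaces, and the manual production gate are placeholder assumptions, and the agent is assumed to have Maven, Docker (already logged in to the registry), and kubectl access:

    pipeline {
        agent any
        environment {
            IMAGE = "registry.example.com/springboot-app:${env.BUILD_NUMBER}"   // placeholder registry
        }
        stages {
            stage('Build and test') {
                steps {
                    sh 'mvn clean verify'          // compile the Spring Boot app and run its tests
                }
            }
            stage('Build and push image') {
                steps {
                    sh 'docker build -t $IMAGE .'  // multi-stage Dockerfile assumed in the repo
                    sh 'docker push $IMAGE'
                }
            }
            stage('Deploy to test') {
                steps {
                    // Declarative Kubernetes manifests kept in the repo under k8s/.
                    sh 'kubectl apply -f k8s/ -n test'
                    sh 'kubectl set image deployment/app app=$IMAGE -n test'
                }
            }
            stage('Deploy to prod') {
                steps {
                    input message: 'Promote this build to production?'   // manual gate for prod
                    sh 'kubectl set image deployment/app app=$IMAGE -n prod'
                }
            }
        }
    }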

How do travis-ci and gitlab-ci compare? [closed]

As far as I understand, the main difference is that GitLab CI is open source (you can install it on your own server) and Travis CI isn't.
So the latter is always cloud/service-based. And it's free for open source projects.
But then GitLab.com (the company, not the software) also has a cloud version that you don't need to install: ci.gitlab.com. And I'm guessing this version can only be used with public repositories hosted in your GitLab account.
But then, there's almost no documentation out there about running GitLab CI this way. Most of the docs I find are about installing the GitLab CI server or the runners. But how are ci.gitlab.com's runners configured? What OS do they run? Can I have Windows/Mac runners? (The software apparently supports these OSs, but it's not specified which runners are supplied by ci.gitlab.com's service.)
Edit: 29/06/2016
As the comments suggest, GitLab now offers what they call shared runners. This means that you no longer need to bring your own runner; you can use theirs instead, just like with Travis CI, but there is a limit of 2,000 minutes of CI run-time per month for the free tier.
** Previous historic answer **
GitLab CI can be used online, but you must bring your own runners. What does this mean? You need to install a piece of software on your servers which will run the tests. It's more complex than Travis.
After installing it, you have to associate it with your project and configure whether you want to run tests inside Docker or on your bare hardware. There are a few more options.
Each time you push a commit to GitLab, a hook is triggered in GitLab CI and a build is sent to an available runner, which executes the build and tests and sends the test results back to the GitLab CI server.
Now, with the latest update, GitLab CI is built into GitLab, but it still works the same way.
