I'm a little confused about Jenkins and was hoping someone could clarify a few things for me.
After reading up on Jenkins, both in the official docs and in various tutorials, I get this:
If I want to set up auto deployment or anything Jenkins-related, I can just install the Jenkins Docker image, launch it, and access it via localhost. That part is clear to me.
Then I just put a Jenkinsfile into my repository so that Jenkins knows what to build and how.
The questions that I have are:
It seems to me that Jenkins needs to be running all the time so that it can watch for repo changes and trigger building, testing, and deploying. If that's the case, I'd have to install Jenkins on my droplet server. But how do I then access my dashboard if all I have is SSH access?
If Jenkins doesn't need to be up and running 24/7, then how does it watch for any changes?
I'm going to deploy my backend and frontend apps with a docker-compose file on my server. I'm not sure where Jenkins fits into all that.
How can Jenkins watch for repository changes and trigger building, testing, and deploying? If it doesn't need to be up and running 24/7, how does it notice any changes?
Jenkins and other automation servers offer two ways to watch for source code changes:
Poll SCM: fetch and compare the source code at predefined intervals. This is simple, but it uses more resources and is a little outdated (a Jenkinsfile sketch of this trigger follows this list).
Webhooks: the better option, offered by GitHub, Bitbucket, GitLab, etc., in which the host (GitHub, for example) makes an HTTP request to your automation server on every git event, sending information such as the branch name and commit author. The Jenkins docs have more info about webhooks.
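To make the difference concrete, here is a minimal declarative Jenkinsfile sketch of the polling trigger; the cron-style interval is just an example (roughly every five minutes), and the build step is a placeholder. With webhooks, you would drop the triggers block and enable the hook on the GitHub/Bitbucket/GitLab side instead.

    pipeline {
        agent any
        // Poll SCM: Jenkins itself checks the repo for new commits ~every 5 minutes.
        // With webhooks you would omit this block and let the git host notify Jenkins.
        triggers {
            pollSCM('H/5 * * * *')
        }
        stages {
            stage('Build') {
                steps {
                    sh 'echo "replace with your real build/test commands"'
                }
            }
        }
    }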
If you don't want a 24/7 dedicated server, you can use:
A serverless platform, or just a simple application able to receive HTTP POSTs, combined with the webhook strategy. For instance, GitHub performs a POST request to your app or serverless function, and at that point you just execute your build, test, or any other commands to deploy your application (see the sketch after this list).
https://buddy.works/, which is like a mini-Jenkins.
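To illustrate the "simple application able to receive HTTP POSTs" idea, here is a minimal Groovy sketch using the JDK's built-in HTTP server. Everything in it is an assumption for illustration: deploy.sh is a hypothetical script standing in for your real build/test/deploy commands, and a real receiver should also verify the webhook's signature before acting on it.

    import com.sun.net.httpserver.HttpServer

    // Minimal webhook receiver: GitHub (or similar) POSTs here on each push.
    def server = HttpServer.create(new InetSocketAddress(8000), 0)
    server.createContext('/webhook') { exchange ->
        def payload = exchange.requestBody.text  // raw JSON event from the git host
        println "Received event: ${payload.take(120)}..."
        // Hypothetical deploy script; swap in your real build/test/deploy commands.
        ['bash', './deploy.sh'].execute()
        exchange.sendResponseHeaders(200, 0)
        exchange.responseBody.close()
    }
    server.start()
    println 'Listening for webhooks on :8000/webhook'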
I'd have to install Jenkins on my droplet server. But how do I then access my dashboard if all I have is SSH access?
Yes. Jenkins is an automation server, so it needs its own dedicated server.
You can install Jenkins manually or use Docker on your droplet. Expose port 8080 for Jenkins. If everything is OK, just browse to the public IP DigitalOcean gives your droplet, e.g. http://<your-droplet-ip>:8080, and that URL should load the Jenkins dashboard. If you'd rather not open the port publicly, your SSH access is enough: forward the port with ssh -L 8080:localhost:8080 user@<your-droplet-ip> and open http://localhost:8080 in your local browser.
Related
I need some guidance and advice, please, on whether and how I can implement a CI/CD pipeline for a corporate PHP Laravel application that is hosted on an internal server with limited access and no public IP. Unfortunately my DevOps knowledge is limited, and despite a lot of Google searches I have no idea whether I am on the right track or a million miles away. Everything I have read so far uses webhooks, which as far as I can tell rely on a public IP/domain.
At present I can work on the application on my laptop and push changes to BitBucket Cloud. While I have managed to create a bitbucket-pipelines.yml file that will automatically build and test any branches that are pushed, I then have to access the server to pull the code manually and run the various scripts as required, which I would much prefer to automate.
What options are there for implementing continuous deployment given these circumstances and limitations?
If anyone could offer some pointers then I would be very grateful.
Thanks
You can't push to a machine that is unreachable, but there are some alternatives:
Configure a bastion host so an external CI/CD runner can SSH through it into your server. Allow inbound port 22 connections to the bastion from your CI/CD provider's CIDR blocks: https://ip-ranges.atlassian.com/
Set up a self-hosted CI/CD runner in the same network as the server (https://support.atlassian.com/bitbucket-cloud/docs/runners/) and use that runner for the deployment step to SSH into your server's private IP address.
Set up a pull-based deployment strategy. Your deployment step only registers the new wanted version, e.g. by moving a release/production branch to the commit ref where the deploy was triggered (see the sketch after this list). On your server, set up some kind of subscription to the latest release/deployment, e.g. a cron task that frequently fetches the production git branch. Upon changes, restart services and run any tasks you need. ansible-pull can be handy for this purpose.
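A minimal sketch of that "register the new wanted version" step. The asker is on Bitbucket Pipelines, but the branch-moving step is CI-agnostic; here it is expressed as a Jenkins pipeline stage, assuming the convention of a production branch and push credentials already configured on the job.

    pipeline {
        agent any
        stages {
            stage('Register release') {
                steps {
                    // Move the production branch to the commit that just passed CI.
                    // The server's cron job (or ansible-pull) sees the change on its
                    // next fetch and rolls the new version out from inside the network.
                    sh 'git branch -f production HEAD'
                    sh 'git push origin production'
                }
            }
        }
    }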
I want to know how to connect Jenkins to my live preprod server.
Let me explain my process; please tell me whether I'm right or whether this isn't a good way to do it.
1) I have my project, project-1, on a server at /var/www/preprod/project-1. The project is built on the Magento CMS, so it contains many files.
2) I copied project-1 into a Git repo, also named project-1.
3) I cloned the project from that repo to my local machine: MAMP/htdocs/project-1.
4) I installed Jenkins and configured it with Git, so whenever I push, Jenkins runs a build automatically.
Now, after the build, I want Jenkins to upload the changes to my live preprod server, both automatically and manually (I'd like to know how to do each).
With this method I develop on my local machine; when a task is finished I push it to Git to keep the change history, and after that I need to push it to the live server.
So please tell me whether I'm using the right method, whether it's good practice, and what I'm missing for continuous deployment & delivery.
You can push it to the server using the Publish Over SSH plugin if you're doing a freestyle job (https://wiki.jenkins.io/display/JENKINS/Publish+Over+SSH+Plugin). If you are doing a pipeline, then you can use a simple scp command.
You can run this after the build completes and it will deploy automatically for you (a pipeline sketch follows below).
Manually, you will be notified when a build is done, and then you copy it onto your server the normal way you would, i.e. copy and paste.
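For the pipeline route, here is a minimal sketch of the automatic path, assuming the build produces build/app.tar.gz and the Jenkins user already has an SSH key authorized on the target; hostnames and paths are placeholders.

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh 'make package'  // stand-in for your real build step
                }
            }
            stage('Deploy to preprod') {
                steps {
                    // Copy the artifact over and unpack it on the preprod box.
                    sh 'scp build/app.tar.gz deploy@preprod.example.com:/var/www/preprod/'
                    sh 'ssh deploy@preprod.example.com "cd /var/www/preprod && tar xzf app.tar.gz"'
                }
            }
        }
    }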
Jenkins is an automation server. The whole point of using Jenkins is to automate things so that manual intervention is not required, so automate it wherever possible.
Hope it helps :)
I have recently started to mess about with Jenkins and am unsure how to deploy my web app to a basic server. I've gotten into the Pipeline (https://jenkins.io/doc/book/pipeline/) and it seems like a fantastic way to work.
Where I'm a bit stuck is in two spots:
Once my repo is in my workspace within Jenkins, how do I prep it so I am only deploying the files necessary for the application? For example, I don't need my src/ directory or my Vagrantfile when I'm deploying things.
How do I deploy my app to the server? I see examples all over the place, but I am getting a bit lost since there seems to be so many ways to do this. I'm assuming scp or something like that...?
To build off of #2, is there a way to deploy web apps as transactions (in one shot) rather than file-by-file?
Please let me know if I can provide any information for potential answers!
I can't speak to your specific use case, but a common way to do this is the build-and-deploy model, where you will have two Jenkins jobs. The "build" job will check out from source, run build commands such as maven or make, and lastly "archive" the build artifacts. The latter is an option under the 'post-build actions' tab at the bottom.
In the "deploy" job, you will grab the artifacts of your choice. You can fetch a single file, all of them, and everything in between. This requires use of the 'Copy Artifact' plug-in and it allows you to copy files generated by other jobs. Now you can run your usual deploy script in the 'Execute Command' box. Most command line paradigms are supported out of the box such as setting environment variables.
The instructions above assume that you want to run your application off of a host that you've provisioned as a Jenkins slave.
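The same model in pipeline form, as a rough sketch: the build job archives its artifacts, and the deploy job pulls them in with the Copy Artifact plugin's copyArtifacts step. Job names, paths, and the deploy script are placeholders.

    // In the build job's Jenkinsfile:
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh 'make'                              // your real build command
                    archiveArtifacts artifacts: 'dist/**'  // the 'archive' post-build action
                }
            }
        }
    }

    // In the deploy job's Jenkinsfile (requires the Copy Artifact plugin):
    pipeline {
        agent any
        stages {
            stage('Deploy') {
                steps {
                    copyArtifacts projectName: 'my-build-job', selector: lastSuccessful()
                    sh './deploy.sh dist/'                 // your usual deploy script
                }
            }
        }
    }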
Use artifacts as mentioned by Paul Back, or a third-party artifact server such as Artifactory.
Deploying file-by-file is always tricky and error-prone, so why not spin up a fresh server with each new release (humanly verified once)?
Jenkins & Ansible are the answer here. This is how I deploy to production: since I have no need for anything like Docker (too many issues with this particular app), I have to run the app natively. A quick example:
You monitor a specific branch in GitLab/GitHub or whatever else, and a webhook is called on push/merge etc. on that branch; at that point, the Jenkins job that monitors the branch handles anything you need to do by running a playbook.
In my case Jenkins and Ansible run on the same server, and Jenkins runs the Ansible playbook that does whatever I need.
For example, with Ansible I copy certain files that need to be there, apply configs, change filenames, set up nginx, run composer, and so on. You get the point.
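A bare-bones version of that setup as a declarative Jenkinsfile, assuming Ansible is installed on the same box as Jenkins; the inventory and playbook names are placeholders.

    pipeline {
        agent any
        stages {
            stage('Deploy') {
                steps {
                    // This job is fired by the webhook on the watched branch.
                    // The playbook copies files, renders configs, sets up nginx,
                    // runs composer, restarts services, etc.
                    sh 'ansible-playbook -i inventories/production deploy.yml'
                }
            }
        }
    }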
I'm new to Jenkins and would like to know whether it's possible to have one Jenkins server deploy/update code on multiple web servers.
Currently I have two web servers, which use Python Fabric for deployment.
Any good tutorials would be greatly welcomed.
One solution could be to declare your web servers as slave nodes.
First, give Jenkins credentials for your servers (login/password, SSH login + private key, or a certificate). This can be configured in the "Manage Credentials" menu.
Then configure the slave nodes; the Jenkins documentation on distributed builds covers this.
Then create a multi-configuration job (you first have to install the matrix-project plugin). This will allow you to send the same deployment instructions to both your servers at once; a pipeline equivalent is sketched below.
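For what it's worth, the modern declarative-pipeline equivalent of a multi-configuration job is the matrix directive, which fans the same steps out across several agents. A rough sketch, with web1/web2 standing in for your slave node labels and deploy.sh as a hypothetical script:

    pipeline {
        agent none
        stages {
            stage('Deploy to all web servers') {
                matrix {
                    axes {
                        axis {
                            name 'SERVER'
                            values 'web1', 'web2'   // labels of your web-server slave nodes
                        }
                    }
                    agent { label "${SERVER}" }     // run this cell on the matching node
                    stages {
                        stage('Deploy') {
                            steps {
                                sh './deploy.sh'    // hypothetical per-server deploy script
                            }
                        }
                    }
                }
            }
        }
    }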
Since you are already using Fabric for deployment, I would suggest installing Fabric on the Jenkins master and having Jenkins kick off the Fabric commands to deploy to the remote servers. You could set up the hostnames or IPs of the remote servers as build parameters and have shell commands iterate over them and run the Fabric commands (see the sketch below). You can take this a step further and have the same job deploy to dev/test/prod just by using a different set of hosts.
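A sketch of that idea, assuming Fabric is installed on the master and your fabfile defines a deploy task; the host list is just an illustration.

    pipeline {
        agent any
        parameters {
            string(name: 'HOSTS',
                   defaultValue: 'web1.example.com,web2.example.com',
                   description: 'Comma-separated servers to deploy to')
        }
        stages {
            stage('Deploy') {
                steps {
                    // Iterate over the hosts and run the Fabric deploy task on each.
                    sh '''
                        for h in $(echo "$HOSTS" | tr ',' ' '); do
                            fab -H "$h" deploy
                        done
                    '''
                }
            }
        }
    }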
I would not make the web servers slave nodes. Reserve slave nodes for build jobs. For example, if you need to build a Windows application, you will need a Windows Jenkins slave. If you have a problem installing Fabric on your Jenkins master, you could create a slave node that is responsible for running Fabric deploys and force anything that runs a Fabric command to use that slave. That feels overly complex, but if you have a ton of builds on your master you might want to go this route.
We're currently evaluating CI servers and Travis CI caught our eye since it is a hosted solution. I haven't been able to find any information about it being able to deploy to Scalr though. Has anyone had any luck setting this up? I found information about using Jenkins to deploy to Scalr but I'd rather not go with Jenkins.
Thanks.
Deploying an application on a Travis CI build success is functionally similar to deploying one on a Jenkins success. All you need to do is hook into Scalr through its API when your build succeeds.
Using Travis CI, you can't really run arbitrary post-build shell scripts (unlike Jenkins). This makes integration a bit more complicated than using Jenkins (with Jenkins you just use the Scalr Command Line Tools to call the Scalr API), but it remains feasible.
All you need to do is have Travis CI send a notification to a webhook endpoint on a webapp you control (host it on your own cloud infrastructure, or on e.g. Heroku), and have that webapp call the Scalr API (a sketch follows).
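The receiving side looks like the webhook sketch earlier in this thread; the only new part is forwarding the event to Scalr. A hedged Groovy sketch of that step: the endpoint URL, auth header, and payload shape are placeholders, since the real paths and request signing are defined in Scalr's API docs.

    // Hedged sketch: forward a (validated) Travis notification to the Scalr API.
    // URL, token header, and payload shape are placeholders -- see Scalr's API docs.
    def notificationJson = '{"status": "passed", "branch": "master"}'  // stub payload
    def conn = new URL('https://my.scalr.example/api/deploy').openConnection()
    conn.requestMethod = 'POST'
    conn.setRequestProperty('X-Auth-Token', System.getenv('SCALR_TOKEN') ?: 'placeholder')
    conn.doOutput = true
    conn.outputStream.withWriter { it << notificationJson }
    println "Scalr responded: ${conn.responseCode}"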
Disclaimer: I work at Scalr.