How to enable Continuous Deployment to an internal server from Bitbucket

I need some guidance and advice, please, on whether and how I can implement a CI/CD pipeline for a corporate PHP Laravel application that is hosted on an internal server with limited access and no public IP. Unfortunately my DevOps knowledge is limited and, despite a lot of Google searches, I have no idea if I am on the right track or a million miles away. Everything that I have read so far uses webhooks, which as far as I can tell rely on a public IP / domain.
At present I can work on the application on my laptop and push changes to BitBucket Cloud. While I have managed to create a bitbucket-pipelines.yml file that will automatically build and test any branches that are pushed, I then have to access the server to pull the code manually and run the various scripts as required, which I would much prefer to automate.
I need to understand, please, what options there are for implementing continuous deployment given these circumstances and limitations.
If anyone could offer some pointers then I would be very grateful.
Thanks

You can't push to a machine that is unreachable, but there are some alternatives.
Configure a bastion host, so an external CI/CD runner can SSH through it into your server. Allow inbound port 22 connections to your bastion host from your CI/CD provider's CIDR blocks (https://ip-ranges.atlassian.com/).
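As a rough sketch only (the bastion hostname, deploy user, private IP and paths below are made up, and it assumes an SSH key has been added in the repository's Pipelines settings), the deployment step could look like this:

```yaml
# bitbucket-pipelines.yml (sketch) -- deploy step that reaches the internal server
# by jumping through the bastion host; hosts, users and paths are placeholders.
pipelines:
  branches:
    production:
      - step:
          name: Deploy through bastion
          script:
            # -J uses the bastion as an SSH jump host (ProxyJump)
            - ssh -J deploy@bastion.example.com deploy@10.0.0.5 "cd /var/www/app && git pull origin production && php artisan migrate --force"
```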
Set up a CI/CD self-hosted runner in the same network as the server (https://support.atlassian.com/bitbucket-cloud/docs/runners/). Use that runner for the deployment step to SSH into your server's private IP address.
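With a runner registered on a machine inside the network, the deployment step only needs matching runs-on labels; in this sketch the labels, private IP and paths are assumptions:

```yaml
# bitbucket-pipelines.yml (sketch) -- run the deploy step on a self-hosted runner
# that already sits inside the corporate network; IP and paths are placeholders.
pipelines:
  branches:
    production:
      - step:
          name: Deploy from inside the network
          runs-on:
            - self.hosted
            - linux
          script:
            - ssh deploy@192.168.1.50 "cd /var/www/app && git pull origin production && php artisan migrate --force"
```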
Set up a pull-based deployment strategy. Your deployment step only registers the new desired version, e.g. by moving a release/production branch to the commit where the deploy was triggered. On your server, set up some kind of subscription to the latest release/deployment, e.g. a cron task that frequently fetches the production git branch. On changes, restart services and run any tasks you need. ansible-pull can be handy for this purpose.
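A minimal sketch of the pull side with ansible-pull (the repository URL, branch, paths and service name are placeholders, and the cron schedule is just an example):

```yaml
# local.yml (sketch) -- playbook run by ansible-pull on the internal server.
# Scheduled from cron with something like:
#   */5 * * * * ansible-pull -U git@bitbucket.org:myteam/myapp.git -C production local.yml
- hosts: localhost
  connection: local
  become: true
  tasks:
    - name: Sync the application code to the production branch
      ansible.builtin.git:
        repo: git@bitbucket.org:myteam/myapp.git
        dest: /var/www/app
        version: production
      register: checkout

    - name: Restart the application service when the code changed
      ansible.builtin.service:
        name: myapp
        state: restarted
      when: checkout.changed
```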

Related

CI/CD to multiple clients environments

I have a Spring Boot application hosted on GitLab. Today I'm using GitLab CI/CD with these steps:
Build application (generate jar)
Test
Deliver the jar to the correct folder on the client (01) machine.
Start the application with the production profile, using systemd on CentOS 8. This profile contains configs specific to the client's production environment.
Important: the GitLab runner runs inside the client's infrastructure because we can't access their infrastructure from the internet.
Now I have a new client (02) that should use the same project code, but in a totally different environment. Here are some of the difficulties:
The GitLab runner runs inside client 01's infrastructure only.
Client 02 doesn't accept access from the internet to their infrastructure either, just like client 01. So I can't use the same GitLab runner for both clients.
I need an application.properties for each client; today I just have them separated by profiles inside the code.
Client 02 uses CentOS 6, so I don't have systemd to start the service and have to use SysV init. So, a different approach to starting the service.
I've thought of one solution, but I would like to know if something better exists; I really like the GitLab CI/CD interface.
My solution: install Jenkins in each client's infrastructure and make it listen for GitLab commits. Each Jenkins instance, in each client, will take care of installing and configuring the application.
To solve the application.properties problem, I will create a repository just for them, separated by client name, so I will have something like application-client01.properties and so on. Each Jenkins will pull the correct properties, because it will be installed in the client's environment.
Is there a better solution?

Docker/Kubernetes with on premise servers

I have a .NET core web API and Angular 7 app that I need to deploy to multiple client servers, potentially running a plethora of different OS setups.
Dockerising the whole app seems like the best way to handle this, so I can ensure that it all works wherever it goes.
My question is about my understanding of Kubernetes and the distribution of the application. We use Azure DevOps for build pipelines, so if I'm correct, would it work as follows:
1) Azure DevOps builds and deploys the image as a Docker container.
2) Kubernetes could realise there is a new version of the Docker image and push it out to all of the different client servers?
3) Client-specific app settings could be handled by Kubernetes secrets.
Is that a reasonable setup? Have I missed anything? And are there any recommendations on setup/guides I can follow to get started?
Thanks in advance, James
Azure DevOps will perform the CI part of your pipeline. Once that completes, Azure DevOps will push the images to ACR (Azure Container Registry). The CD part should be done either directly from Azure DevOps (you may have to install a private agent on your on-prem servers and configure the firewall, etc.) or with Kubernetes-native CD tools such as Spinnaker or Jenkins X. Secrets should be kept in Kubernetes Secrets.
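For the client-specific settings, a minimal sketch of a Kubernetes Secret consumed as environment variables (the names, keys and image below are illustrative, not from your setup):

```yaml
# client01.yaml (sketch) -- per-client configuration kept in a Secret and
# injected into the API container; all names and values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: client01-appsettings
type: Opaque
stringData:
  ASPNETCORE_ENVIRONMENT: Production
  ConnectionStrings__Default: "Server=client01-db;Database=app;User Id=app;Password=changeme;"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api
    spec:
      containers:
        - name: web-api
          image: myregistry.azurecr.io/web-api:latest   # image pushed from Azure DevOps to ACR
          envFrom:
            - secretRef:
                name: client01-appsettings
```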

How can I access my Jenkins dashboard on my remote droplet server?

I'm a little confused about Jenkins and was hoping someone could clarify a few things for me.
After reading up on Jenkins, both from the official docs and various tutorials, I get this:
If I want to set up auto deployment or anything Jenkins-related, I can just install the Jenkins Docker image, launch it and access it via localhost. That is clear to me.
Then, I just put a Jenkinsfile into my repository, so that it knows what and how to build my repo and stuff.
The questions that I have are:
It seems to me that Jenkins needs to be running all the time, so that it can watch for all the repo changes and trigger code building, testing and deploying. If that is the case, I'd have to install Jenkins on my droplet server. But how do I then access my dashboard, if all I have is ssh access?
If Jenkins doesn't need to be up and running 24/7, then how does it watch for any changes?
I'll try to deploy my backend and frontend apps with a docker-compose file on my server. I'm not sure where Jenkins fits into all that.
How can Jenkins watch for all the repository changes and trigger code building, testing and deploying?
If Jenkins doesn't need to be up and running 24/7, then how does it watch for any changes?
Jenkins and other automation servers offer two options to watch for source code changes:
Poll SCM: download and compare the source code at predefined intervals. This is simple, but hardware consumption is high and the approach is a little outdated.
Webhooks: the optimal functionality offered by GitHub, Bitbucket, GitLab, etc., in which GitHub, for example, makes an HTTP request to your automation server on any git event, sending all the information (branch name, commit author, etc.). Here is more info about webhooks and Jenkins.
If you don't want a 24/7 dedicated server, you can use:
Some serverless platform, or just a simple application able to receive HTTP POST requests, combined with the webhook strategy. For instance, GitHub will perform a POST request to your app/serverless function, and at that point you just execute your build, test or any other commands to deploy your application.
https://buddy.works/. It is like a mini-Jenkins.
I'd have to install Jenkins on my droplet server. But how do I then access my dashboard, if all I have is SSH access?
Yes. Jenkins is an automation server, so it needs its own dedicated server.
You can install Jenkins manually or use Docker on your droplet. Configure port 8080 for your Jenkins. If everything is OK, just access your droplet's public IP provided by DigitalOcean, like http://197.154.458.456:8080. This URL should load the Jenkins dashboard.
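Since you already plan to use docker-compose on the droplet, one way to run Jenkins is a sketch like this (image tag, ports and volume name are common defaults, not requirements):

```yaml
# docker-compose.yml (sketch) -- Jenkins on the droplet, dashboard on port 8080.
services:
  jenkins:
    image: jenkins/jenkins:lts
    restart: unless-stopped
    ports:
      - "8080:8080"    # web dashboard
      - "50000:50000"  # inbound agent connections
    volumes:
      - jenkins_home:/var/jenkins_home
volumes:
  jenkins_home:
```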

How can I deploy to Google Compute Engine via CI/CD

I have a jar and a Docker image that I wish to deploy to my Compute Engine instance, and I want to run docker compose down/up once they are there. I can use git on the instance if that helps.
I want to do this using CI/CD tools, something like Google Cloud Build, GitLab or Bitbucket Pipelines. Ideally something that has a free tier.
I am aware this might be a bit vague, so I am willing to add more details if necessary.
In your case you can try Jenkins and use an SSH plugin to execute commands on your remote instance and transfer the files. There are some considerations that you might want to take into account before doing that.
1. Add your SSH keys to the metadata for that instance.
2. Make sure your firewall rules allow incoming traffic on port 22.
Once your instance allows incoming traffic on port 22 and you've installed the SSH plugin, you just have to type the commands (docker-compose up/down) in the SSH section added by the plugin.

Deploying code on multiple server with Jenkins

I'm new to Jenkins, and I'd like to know if it is possible to have one Jenkins server deploy/update code on multiple web servers.
Currently, I have two web servers, which use Python Fabric for deployment.
Any good tutorials will be greatly welcomed.
One solution could be to declare your web servers as slave nodes.
First, give Jenkins credentials for your servers (login/password, SSH login + private key, or certificate). This can be configured in the "Manage credentials" menu.
Then configure the slave nodes (read the docs).
Then create a multi-configuration job. First you have to install the matrix-project plugin. This will allow you to send the same deployment instructions to both your servers at once.
Since you are already using Fabric for deployment, I would suggest installing Fabric on the Jenkins master and having Jenkins kick off the Fabric commands to deploy to the remote servers. You could set up the hostnames or IPs of the remote servers as parameters to the build and just have shell commands that iterate over them and run the Fabric commands. You can take this a step further and have the same job deploy to dev/test/prod just by using a different set of hosts.
I would not make the web servers slave nodes. Reserve slave nodes for build jobs. For example, if you need to build a Windows application, you will need a Windows Jenkins slave. If you have a problem with installing Fabric on your Jenkins master, you could create a slave node that is responsible for running Fabric deploys and force anything that runs a Fabric command to use that slave. I feel like this is overly complex, but if you have a ton of builds on your master, you might want to go this route.
