I have a Spring Boot application hosted on GitLab. Today I'm using GitLab CI/CD with these steps (sketched below):
Build the application (generate the jar)
Test
Deliver the jar to the correct folder on the client (01) machine
Start the application with the production profile, using systemd on CentOS 8. This profile contains configs specific to that client's production environment.
Important: the GitLab runner runs inside the client's infrastructure because we can't access their infrastructure from the internet.
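Roughly, the .gitlab-ci.yml looks like this (simplified; job names, paths, the myapp service name and the Maven wrapper are placeholders):

```yaml
# Simplified sketch; assumes the runner's user may restart the service via sudo.
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - ./mvnw package -DskipTests
  artifacts:
    paths:
      - target/*.jar

test:
  stage: test
  script:
    - ./mvnw test

deploy_client01:
  stage: deploy
  tags:
    - client01                        # runner registered inside client 01's network
  script:
    - cp target/*.jar /opt/myapp/myapp.jar
    - sudo systemctl restart myapp    # systemd on CentOS 8
  only:
    - master
```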
Now I have a new client (02) that should use the same project code, but in a totally different environment. Here are some of the difficulties:
The GitLab runner runs inside client 01's infrastructure only.
Client 02 doesn't accept access from the internet to their infrastructure either, just like client 01. So I can't use the same GitLab runner for both clients.
I need an application.properties per client; today I only have them separated by profiles inside the code.
Client 02 uses CentOS 6, so I don't have systemd to start the service and have to use SysV init. So, a different approach to starting the service.
I have thought of one solution, but I would like to know if something better exists; I really like the GitLab CI/CD interface.
My solution: install Jenkins inside each client's infrastructure and make it listen for GitLab commits. Each Jenkins, in each client, will take care of installing and configuring the application.
To solve the application.properties problem, I will create a repository just for those files, separated by client name, so I will have something like application-client01.properties, and so on. Jenkins will just pull the correct properties file, because it will be installed in the client's environment. A sketch of that pull step is below.
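To illustrate, the pull step on each client could look something like this (shown here as a CI job for brevity, but the same commands would run from a Jenkins job; the properties repo URL, token variable and target path are placeholders):

```yaml
# Hypothetical job on the client-side runner/agent.
deploy_properties:
  stage: deploy
  variables:
    CLIENT: "client01"                # "client02" on the other installation
  script:
    - git clone https://deploy-token:${PROPS_TOKEN}@gitlab.example.com/myorg/app-properties.git
    - cp app-properties/application-${CLIENT}.properties /opt/myapp/config/application.properties
```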
Is there a better solution?
Related
I need some guidance and advice please on whether / how I can implement a CI/CD pipeline for a corporate PHP Laravel application that is hosted on an internal server with limited access and no public IP. Unfortunately my DevOps knowledge is limited and, despite a lot of Google searches, I have no idea if I am on the right track or a million miles away. Everything that I have read so far looks at using webhooks, which as far as I can tell rely on a public IP / domain.
At present I can work on the application on my laptop and push changes to Bitbucket Cloud. While I have managed to create a bitbucket-pipelines.yml file that automatically builds and tests any branches that are pushed, I then have to access the server to pull the code manually and run the various scripts as required, which I would much prefer to automate.
What options are there for implementing continuous deployment given these circumstances and limitations?
If anyone could offer some pointers I would be very grateful.
Thanks
You can't push to a machine that is unreachable, but there are some alternatives:
Configure a bastion host so an external CI/CD runner can SSH through it into your server. Allow inbound port 22 connections from your CI/CD provider's CIDR blocks (https://ip-ranges.atlassian.com/) to your bastion host.
Set up a self-hosted CI/CD runner in the same network as the server (https://support.atlassian.com/bitbucket-cloud/docs/runners/). Use that runner for the deployment step to SSH into your server's private IP address; a sketch is shown after this list.
Set up a pull-based deployment strategy. Your deployment step only registers the new desired version, e.g. by moving a release/production branch to the commit ref where the deploy was triggered. On your server, set up some kind of subscription to the latest release/deployment, e.g. a cron task that frequently fetches the production git branch. Upon changes, restart services and run any tasks you need. ansible-pull can be handy for this purpose.
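For the second option, a minimal bitbucket-pipelines.yml sketch of the deployment step, to be combined with your existing build/test steps (the host, user, branch and remote commands are placeholders; the SSH key would be configured in the repository's Pipelines settings):

```yaml
pipelines:
  branches:
    main:
      - step:
          name: Deploy from self-hosted runner
          runs-on:
            - self.hosted
            - linux
          script:
            # The runner sits inside your network, so the server's private IP is reachable.
            - ssh -o StrictHostKeyChecking=no deploy@10.0.0.12 "cd /var/www/app && git pull && php artisan migrate --force"
```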
My goal is to run a bunch of e2e tests every night to check whether the code changes made the day before break core features of our app.
Our platform is an Angular app which calls 3 separate Node.js backends (auth-backend, old-backend and new-backend). We also use MongoDB as the database.
Assume each of the 4 projects has a branch called develop, and that this is the only branch that should be tested.
My approach would be the following:
Run every backend plus the database in a separate Docker container.
For that I need to either get the latest build of each project from GitLab using SSH,
or clone the repo into the Docker container and run a build inside it.
After all projects are running on the right ports (which I'd specify somewhere), I start the npm script that runs the Cypress e2e tests.
All of that should be defined in some file. Is that even possible?
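In GitLab CI terms, I imagine something like this rough sketch, assuming each backend publishes a Docker image tagged develop (image names, service aliases, ports and the npm script are made up):

```yaml
e2e:
  image: cypress/base              # any image with the Cypress OS dependencies
  services:
    - name: mongo:4
      alias: mongodb
    - name: registry.example.com/auth-backend:develop
      alias: auth-backend
    - name: registry.example.com/old-backend:develop
      alias: old-backend
    - name: registry.example.com/new-backend:develop
      alias: new-backend
  variables:
    FF_NETWORK_PER_BUILD: "true"   # lets the services reach each other and MongoDB
    API_URL: "http://new-backend:3000"
  script:
    - npm ci
    - npm run e2e                  # runs `cypress run` against the services above
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'   # triggered by a nightly pipeline schedule
```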
I do not have experience with GitLab CI, but I know that other CI systems provide the option to run, e.g., bash scripts.
So I guess you can do the following:
Write a local bash script that pulls all the repos (since GitLab can provide secret keys, you can use these to authenticate against your GitLab repos).
After all of these repos have been pulled, run the build commands for your different repos.
Since some of your repos depend on each other, you may have to add a build command for exactly this use case, so that you always have a production-like state, or whatever you need.
After you have pulled and built your repos, start the servers for your backends.
I guess your Angular app uses some kind of environment variables to define which servers to send requests to, so you also have to set them in the build command/script for your app.
Then you should be able to run your tests.
Personally I think that Docker is kind of overkill for this use case. Alternatively, you could define and run a pipeline that always creates a new develop state of your backend and pushes the Docker image to your server. Then you should be able to create your test pipeline, which first starts the Docker containers on your own server (so you do not have an "in-pipeline server"). That brings up all your backends, so the test pipeline can then run your e2e tests against those backend servers; a docker-compose sketch of that server-side piece is below.
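For the server-side piece, the docker-compose file could look roughly like this (image names, ports and env vars are just examples):

```yaml
# docker-compose.yml on the test server; the test pipeline would run
# "docker-compose pull && docker-compose up -d" here before the Cypress step.
version: "3.8"
services:
  mongodb:
    image: mongo:4
    ports:
      - "27017:27017"
  auth-backend:
    image: registry.example.com/auth-backend:develop
    ports:
      - "3001:3000"
    environment:
      MONGO_URL: mongodb://mongodb:27017/app
  old-backend:
    image: registry.example.com/old-backend:develop
    ports:
      - "3002:3000"
  new-backend:
    image: registry.example.com/new-backend:develop
    ports:
      - "3003:3000"
```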
I would also advise not running this pipeline every night, but rather whenever the develop state of one of those linked repos changes.
If you need help setting this up, feel free to contact me.
I have a .NET Core web API and an Angular 7 app that I need to deploy to multiple client servers, potentially running a plethora of different OS setups.
Dockerising the whole app seems like the best way to handle this, so I can ensure that it all works wherever it goes.
My question is on my understanding of Kubernetes and the distribution of the application. We use Azure Dev Ops for build pipelines, so if I'm correct would it work as follows:
1) Azure Dev Ops builds and deploys the image as a Docker container.
2) Kubernetes could realise there is a new version of the docker image and push this around all of the different client servers?
3) Client specific app settings could be handled by Kubernetes secrets.
Is that a reasonable setup? Have I missed anything? And are there any recommendations on setup/guides I can follow to get started.
Thanks in advance, James
Azure DevOps will perform the CI part of your pipeline. Once that completes, Azure DevOps pushes the images to ACR. The CD part should be done either directly from Azure DevOps (you may have to install a private agent on your on-prem servers and configure the firewall, etc.) or with Kubernetes-native CD tools such as Spinnaker or Jenkins X. Client-specific settings should be kept in Kubernetes Secrets.
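For the client-specific settings, a minimal Secret sketch could look like this (the name, keys and values are only examples):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: client01-appsettings
type: Opaque
stringData:
  ConnectionStrings__Default: "Server=db.client01.local;Database=app;User Id=app;Password=changeme"
  Frontend__ApiBaseUrl: "https://api.client01.example.com"
```

The Deployment for that client would then reference it via envFrom with a secretRef, or mount it as a file.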
I'm a little confused about Jenkins and was hoping someone could clarify a few things for me.
After reading up on Jenkins, both from the official docs and various tutorials, I get this:
If I want to set up auto deployment or anything Jenkins-related, I could just install the Jenkins Docker image, launch it and access it via localhost. That is clear to me.
Then I just put a Jenkinsfile into my repository, so that Jenkins knows what to build and how.
The questions that I have are:
It seems to me that Jenkins needs to be running all the time, so that it can watch for all the repo changes and trigger code building, testing and deploying. If that is the case, I'd have to install Jenkins on my droplet server. But how do I then access my dashboard, if all I have is ssh access?
If Jenkins doesn't need to be up and running 24/7, then how does it watch for any changes?
I'll try to deploy my backend and frontend apps with a docker-compose file on my server. I'm not sure where Jenkins fits into all of that.
How can Jenkins watch for all the repository changes and trigger code building, testing and deploying?
If Jenkins doesn't need to be up and running 24/7, then how does it watch for any changes?
Jenkins and other automation servers offer two options for watching source code changes:
Poll SCM: download and compare the source code at predefined intervals. This is simple, but hardware consumption is high and the approach is a little outdated.
Webhooks: the optimal option, offered by GitHub, Bitbucket, GitLab, etc., in which GitHub, for example, makes an HTTP request to your automation server on any git event, sending all the information (branch name, commit author, etc.). Here is more info about webhooks and Jenkins.
If you don't want a 24/7 dedicated server, you can use:
A serverless platform, or just a simple application able to receive HTTP POST requests, combined with the webhook strategy. For instance, GitHub will perform a POST request to your app/serverless function, and at that point you just execute your build, test or any other commands to deploy your application.
https://buddy.works/, which is like a mini Jenkins.
If I'd have to install Jenkins on my droplet server, how do I then access my dashboard, if all I have is SSH access?
Yes. Jenkins is an automation server, so it needs its own dedicated server.
You can install Jenkins manually or use Docker in your droplet. Configure port 8080 for Jenkins. If everything is OK, just access your droplet's public IP offered by DigitalOcean, like http://197.154.458.456:8080. This URL should load the Jenkins dashboard.
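For example, with Docker a minimal docker-compose file for Jenkins could look like this (the volume name is just an example; the named volume keeps Jenkins data across restarts):

```yaml
version: "3.8"
services:
  jenkins:
    image: jenkins/jenkins:lts
    restart: unless-stopped
    ports:
      - "8080:8080"      # dashboard: http://<droplet-public-ip>:8080
      - "50000:50000"    # inbound agent connections
    volumes:
      - jenkins_home:/var/jenkins_home
volumes:
  jenkins_home:
```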
I have a jar and a Docker image that I wish to deploy to my Compute Engine instance, then run docker compose down/up once they are there. I can use git on the instance if that helps.
I want to do this using CI/CD tools, something like Google Cloud Build, GitLab or Bitbucket Pipelines; ideally something that has a free tier.
I am aware this might be a bit vague, so I am willing to add more details if necessary.
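For example, if I went the GitLab route, I imagine the deploy job would look roughly like this (the $DEPLOY_SSH_KEY and $DEPLOY_HOST variables, paths and branch are placeholders; it assumes the jar is built in an earlier stage and the Docker image is pullable from a registry on the instance):

```yaml
deploy:
  stage: deploy
  image: alpine:3.19
  before_script:
    - apk add --no-cache openssh-client
    - eval "$(ssh-agent -s)"
    - echo "$DEPLOY_SSH_KEY" | tr -d '\r' | ssh-add -
  script:
    - scp -o StrictHostKeyChecking=no docker-compose.yml target/app.jar deploy@"$DEPLOY_HOST":/opt/app/
    - ssh -o StrictHostKeyChecking=no deploy@"$DEPLOY_HOST" "cd /opt/app && docker compose down && docker compose up -d"
  only:
    - master
```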
In your case you can try Jenkins and use an SSH plugin to execute commands on your remote instance and send the files. There are a few things you might want to take care of before doing that:
1. Add your SSH keys to the metadata for that instance.
2. Make sure your firewall rules allow incoming traffic on port 22.
Once your instance allows incoming traffic on port 22 and you've installed the SSH plugin, you just have to enter the commands (docker-compose up/down) in the SSH section added by the plugin.