How can I run a script on my server using gcloud compute?

I'm deploying my Rails apps on Compute Engine, and my code is hosted on GitHub. I want to push changes to my master branch and then execute a gcloud compute command that tells my instances to pull the master repository and restart nginx.
If I can't execute a script from SSH, what's the best way to tell my instances to update to the latest git commit and restart, so my apps are running on the latest codebase?
I've tried using the Release Pipeline, but it doesn't seem to work for Rails.

You can use a server automation system for something like this. For example:
SaltStack allows remote command invocation, as well as a thousand other useful server management features.
Ansible, which is built on top of SSH, is great for running commands remotely.
Most other server automation systems (Chef, Puppet) also provide some way to run a command remotely.
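For the simplest version of this, though, plain gcloud or an Ansible ad-hoc command is enough. A minimal sketch, where the instance name, zone, app path, and inventory group are all placeholders:

# Run the update over SSH on one instance via gcloud:
gcloud compute ssh my-rails-instance --zone us-central1-a \
  --command 'cd /srv/myapp && git pull origin master && sudo service nginx restart'

# The same thing as an Ansible ad-hoc command against a "webservers" group:
ansible webservers -m shell \
  -a 'cd /srv/myapp && git pull origin master && sudo service nginx restart'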

Related

Gitlab CI: How to configure cypress e2e tests with multiple server instances?

My goal is to run a bunch of e2e tests every night to check if the code changes made the day before break core features of our app.
Our platform is an Angular app which calls 3 separate Node.js backends (auth-backend, old- and new-backend). We also use MongoDB as the database.
Assume each of the four projects has a branch called develop, which is the only branch that should be tested.
My approach would be the following:
I would run each backend, plus the database, in its own Docker container.
To do that, I need to either fetch the latest build of each project from GitLab over SSH,
or clone the repo into the Docker container and run a build inside it.
After all projects are running on the right ports (which I'd specify somewhere), I'd start the npm script that runs the Cypress e2e tests.
All of that should be defined in some file. Is that even possible?
I do not have experience with GitLab CI specifically, but I know that other CI systems can run e.g. bash scripts.
So I guess you can do the following:
Write a local bash script that pulls all the repos (since GitLab can provide secret keys, you can use these to authenticate against your GitLab repos); see the sketch after this list.
After all of these repos are pulled, run the build commands for your different repos.
Since some of your repos depend on each other, you may have to add a build command for exactly that use case, so that you always have a production-like state, or whatever you need.
After you have pulled and built your repos, start the servers for your backends.
I guess your Angular app uses some kind of environment variables to define which servers to send requests to, so you also have to set those in the build command/script for your app.
Then you should be able to run your tests.
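A minimal sketch of such a script, with hypothetical group/repo names, URLs, and ports (CI_JOB_TOKEN is GitLab CI's built-in job token):

#!/usr/bin/env bash
set -euo pipefail

# Pull and build the develop branch of each project (names are placeholders).
for repo in auth-backend old-backend new-backend frontend; do
  git clone --branch develop \
    "https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/mygroup/${repo}.git"
  (cd "$repo" && npm ci && npm run build)
done

# Start the backends in the background on fixed (made-up) ports.
(cd auth-backend && PORT=3001 npm start &)
(cd old-backend  && PORT=3002 npm start &)
(cd new-backend  && PORT=3003 npm start &)

# Point the Angular app at those backends and run the Cypress e2e tests.
(cd frontend && API_URL=http://localhost:3001 npx cypress run)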
Personally I think Docker is kind of overkill for this use case. Possibly you should define and run a pipeline that always creates a fresh develop state of your backend and pushes the Docker image to your server. Then you can create your test pipeline, which first starts the Docker containers on your own server (so you do not have an "in-pipeline server"). At that point all your backends are up, and the test pipeline can run your e2e tests against those backend servers.
I would also advise running this pipeline not every night, but whenever the develop state of one of the linked repos changes.
If you need help setting this up, feel free to contact me.

Using Jenkins and Docker to deploy to a server

Hey, I am currently learning Jenkins pipelines for CI and CD.
I successfully deployed my Express.js app with Jenkins,
on my local machine acting as the server.
It was deployed to the server, but my ENV values were exposed in my public repository.
I am trying to understand how to hide those ENV values in my Jenkins, using variables.
And is it possible to use variables in the Dockerfile as well, to hide my ENV?
In my Jenkins pipeline,
I pass my ENV with docker run -p -e myEnV=key.
I would love to hide my ENV so people can't see my keys inside my Jenkinsfile and Dockerfile.
I am using multibranch pipelines in Jenkins, because I followed the Hackernoon article on deploying React and Node.js apps with Jenkins.
And anyway, what is the advantage of pushing our container image to Docker Hub?
If we push it there and later want to move to another server,
we just need to pull our repo from Docker Hub on the new server, because everything we build is pushed to our Docker Hub repo, right?
For your first question, you should use the EnvInject plugin. Or, if you are running Docker from the pipeline, set an environment variable in Jenkins and access it in the docker run command.
In the pipeline, you can access an environment variable like this:
${env.DEVOPS_KEY}
So your docker run command will be:
docker run -p -e myEnV=${env.DEVOPS_KEY}
But make sure you have set DEVOPS_KEY on the Jenkins server.
Using EnvInject is pretty simple.
You can also inject variables from a file.
For your second question: yes, just pull the image from Docker Hub and use it.
Anyone on your team can pull and run it, so only the Jenkins server has to build and push the image. That saves time for everyone else, and the image stays up to date and available remotely.
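For example, moving to a new server then comes down to something like this (image and container names are placeholders):

# Pull the image Jenkins pushed to Docker Hub and run it on the new box.
docker pull myuser/myapp:latest
docker run -d --name myapp -p 3000:3000 myuser/myapp:latest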
Never push or keep sensitive data in your Docker image.
Using Docker Hub, or any kind of registry like Sonatype Nexus, Docker Registry, or JFrog Artifactory, helps you keep your images with their tags and share them with anyone. It also means the images are safe there: if your local environment goes down, the images remain available. It helps with version control, too. If you are using multibranch pipelines, you will probably generate different images with different tags.
Running Jenkins, the jobs, and the deployments all on the same server is not good practice. In my experience from previous work, the typical symptoms are: the server starts getting bloated after a while, Jenkins stops working at exactly the moment you need it most, and the application you deployed stops working because Jenkins' many jobs take up all the resources.
Currently, I am running separate servers for the Jenkins master and slaves. The master instance does not run any jobs; only the slave instances do. This keeps Jenkins alive all the time, and if a slave goes down, you can simply set up another one.
For deployment, I am using Ansible, which can deploy the same Docker image to multiple servers simultaneously. It is easy to use and, in my opinion, quite safe as well.
For sensitive data such as keys, passwords, and API keys, you are right about using the -e flag. You can also use --env-file; this way, you keep the values outside the Docker image, in a file on the host. For passwords, I prefer to have a shell script that generates the passwords into environment files.
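A minimal sketch of the --env-file approach (paths, names, and values are hypothetical):

# Keep secrets out of the image: write them to a file on the host...
cat > /etc/myapp/app.env <<'EOF'
DEVOPS_KEY=changeme
DB_PASSWORD=changeme
EOF
chmod 600 /etc/myapp/app.env

# ...and hand the whole file to the container at run time.
docker run -d -p 3000:3000 --env-file /etc/myapp/app.env myuser/myapp:latest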
If you plan to use the environment variable as-is, you can safely store the value inside Jenkins and then read it back as a variable; the Jenkins website describes how.

How can I access my Jenkins dashboard on my remote droplet server?

I'm a little confused about Jenkins and was hoping someone could clarify some matters for me.
After reading up on Jenkins, both in the official docs and in various tutorials, I get this:
If I want to set up auto deployment or anything Jenkins-related, I can just pull the Jenkins Docker image, launch it, and access it via localhost. That is clear to me.
Then I just put a Jenkinsfile into my repository, so that it knows what and how to build my repo and so on.
The questions that I have are:
It seems to me that Jenkins needs to be running all the time so that it can watch for repo changes and trigger building, testing, and deploying the code. If that is the case, I'd have to install Jenkins on my droplet server. But how do I then access my dashboard if all I have is SSH access?
If Jenkins doesn't need to be up and running 24/7, then how does it watch for any changes?
I plan to deploy my backend and frontend apps with a docker-compose file on my server. I'm not sure where Jenkins fits into all of that.
How can Jenkins watch for repository changes and trigger building, testing, and deploying the code?
If Jenkins doesn't need to be up and running 24/7, then how does it watch for any changes?
Jenkins and other automation servers offer two options to watch source code changes:
Poll SCM: download and compare the source code at predefined intervals. This is simple, but it consumes more resources and is a little outdated.
Webhooks: the optimal approach, offered by GitHub, Bitbucket, GitLab, etc., in which GitHub, for example, makes an HTTP request to your automation server on any git event, sending along all the information (branch name, commit author, etc.). Here is more info about webhooks and Jenkins.
If you don't want a 24/7 dedicated server, you can use:
Some serverless platform, or just a simple application able to receive HTTP POSTs, combined with the webhook strategy. For instance, GitHub will perform a POST request to your app or serverless function, and at that point you just execute your build, test, or any other commands to deploy your application.
https://buddy.works/. It is like a mini-jenkins.
If I have to install Jenkins on my droplet server, how do I then access my dashboard if all I have is SSH access?
Yes. Jenkins is an automation server, so it needs its own dedicated server.
You can install Jenkins manually or use Docker in your droplet. Configure port 8080 for Jenkins. If everything is OK, just browse to your droplet's public IP from DigitalOcean, like http://197.154.458.456:8080. This URL should load the Jenkins dashboard.
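A minimal sketch, assuming Docker is already installed on the droplet (user and host are placeholders):

# On the droplet: run Jenkins in Docker, persisting its home directory.
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts

# If port 8080 is not open to the outside world, you can still reach the
# dashboard over the SSH access you already have, via a local port forward:
ssh -L 8080:localhost:8080 user@your-droplet-ip
# ...then open http://localhost:8080 in your local browser.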

CI/CD with Docker - what is the final deployment step?

I am developing a small website (Ruby/Sinatra) to be used internally where I work. (Simply, it crunches some source data and generates reports.)
I want to deploy it using Docker, and I have a setup that works in my dev environment, but I'm trying to understand the workflow for "production" deployment (we're using Jenkins).
I've read lots of articles about deployment workflows using Docker, but they all seem to stop at "and then push your image to the Docker registry". What seems to be missing is how to then take that image and actually update the application.
I appreciate that every application is likely to be different, but what is the next step? I'm aware of lots of different frameworks like Chef, Puppet, Ansible that could be used, but my question really is - how do I integrate that into my CI/CD pipeline? E.g. does a job "push" the changes to the production server, or should a Jenkins slave be running on the production server to execute a job directly on the server?
There are several orchestration tools, like Docker Swarm, Kubernetes, and Rancher. In Docker Swarm, for example, you create services and can update their image versions in a blue-green deployment manner, even for just one instance (although then there is no real blue-green :) ). If you just use docker run, you should find your running container, stop and remove it if it is running, and start a new container with the newer image version.
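A sketch of both variants (service, container, and image names are placeholders):

# Docker Swarm: roll the service over to the new image version.
docker service update --image myuser/myapp:1.2.0 myapp

# Plain docker run: replace the running container by hand.
docker stop myapp && docker rm myapp
docker run -d --name myapp -p 3000:3000 myuser/myapp:1.2.0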
It depends on how your application is configured to run. In my case, I have a call to docker run in a systemd unit, configured to just restart the container whenever it stops.
So, in my Jenkinsfile, after I push the image to the registry, I do a docker pull (my Jenkins agent runs on the same box the application runs on) and then a docker stop. That causes the application to exit and restart, which makes it pick up the new version that was just pulled, and now it's running the new version.
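A minimal sketch of that arrangement; the unit, container, and image names are hypothetical:

# /etc/systemd/system/myapp.service, installed once on the app box:
#   [Service]
#   ExecStartPre=-/usr/bin/docker rm -f myapp
#   ExecStart=/usr/bin/docker run --name myapp -p 3000:3000 myuser/myapp:latest
#   Restart=always
#   [Install]
#   WantedBy=multi-user.target

# Deploy step in the Jenkins job, after the image has been pushed:
docker pull myuser/myapp:latest
docker stop myapp   # systemd restarts the unit with the freshly pulled image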

CI/CD with Jenkins and Vagrant

I wanted to build a Jenkins server which would run tests of my Puppet code on Vagrant. The issue I found is that we already run our servers as VMs, either in VMware or AWS, and Vagrant will not work as another layer of virtualisation.
Does anyone have an idea how I can create a test platform for my Puppet code? What I want to test is the deployment of manifests on the nodes themselves, i.e. if I deploy a class like webserver, or make changes to it, I would like to check whether it affects or breaks the deployment of the other classes.
The idea would be to iterate over all the classes/roles and see if the deployments pass. I would like to make it automatic and independent of our engineers. At the moment we run manual tests with vagrant up, but there are too many roles to do that by hand.
Any ideas how can I tackle this?
You can use either Docker or AWS provider for Vagrant.
In the case of the AWS provider, you need to set up rsync to get your environment onto the newly launched instance.
If your Vagrant scripts are robust, you can use the same script for both local deployment on your workstation and AWS/Docker deployment on CI server.
There are drawbacks to both techniques: with Docker you are limited to the same kernel that the Jenkins server is running, and with AWS you will incur additional costs. However, with AWS you don't need to allocate as many resources to your Jenkins server, so you might even save money this way, because you will be paying for the extra VMs only while you are running your tests. Just make sure you shut them down when you are done.
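For the AWS variant, a minimal sketch using the vagrant-aws plugin (the AMI, credentials, and rsync settings are assumed to be configured in your Vagrantfile):

# One-time setup: install the AWS provider plugin.
vagrant plugin install vagrant-aws

# Bring the machine up in EC2 instead of a local hypervisor,
# run the Puppet provisioning, then tear the instance down again.
vagrant up --provider=aws
vagrant provision
vagrant destroy -f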
Is there any special reason why you want to use Vagrant? I'm not sure whether you are setting up your production environment with Vagrant or not.
In case you are not bound to Vagrant, I would recommend thinking about using a Docker image to prepare a lightweight environment to run your setups and verifications in.
When doing your tests, spin up a container from an image that contains your Puppet distribution and run your setups and tests inside it. If you have special kernel requirements, use a separate Jenkins slave/agent machine rather than executing jobs on the Jenkins master.
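As a minimal sketch, assuming the puppet/puppet-agent image and a hypothetical module layout:

# Dry-run a manifest inside a throwaway container;
# --noop reports what would change without applying anything.
docker run --rm -v "$PWD":/code puppet/puppet-agent apply \
  --modulepath=/code/modules --noop /code/manifests/site.pp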
If you are not sure how to get started using Jenkins with Docker, have a look at the examples section of the Jenkins documentation. The provided examples use the declarative pipeline syntax, which is still fairly new. Also consider the collapsed "Toggle Scripted Pipeline" sections, which show the Groovy pipeline scripts that are a lot more forgiving for Jenkins pipeline beginners.
Those should be quite good pointers for getting started with running and testing your Puppet scripts inside Docker. For building and using a Docker image, there should be more than enough tutorials out there.
Let me know if this was a hint in the right direction or if I misinterpreted your question.
