Rollback AWS Beanstalk deploy via CLI - Jenkins

I have a Jenkins job which builds and deploys a Grails app to AWS Elastic Beanstalk, and a script which then checks the HTTP status of the deployed page. Is there a way to roll back the change if it does not return a 200?

Grab the version that is running before the deployment and save it.
Deploy.
If the check doesn't come back 200 OK, just run a Beanstalk deploy of the previous version you captured.
As far as true "rollbacks" go, no; that sort of thing is usually done inside single transactions to make sure data consistency is maintained.
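The steps above can be sketched with the AWS CLI and EB CLI. This is a hypothetical sketch, not a definitive implementation: the application/environment names and health-check URL are placeholders, and you would wire the final function call into your Jenkins job.

```shell
#!/bin/sh
# Decide from an HTTP status code whether a rollback is needed.
needs_rollback() {
    [ "$1" != "200" ]
}

# Capture the version label currently deployed to an environment.
current_version() {
    aws elasticbeanstalk describe-environments \
        --application-name "$1" --environment-names "$2" \
        --query 'Environments[0].VersionLabel' --output text
}

# Deploy, smoke-test, and redeploy the captured version on failure.
deploy_with_rollback() {
    app="$1"; env="$2"; url="$3"
    prev=$(current_version "$app" "$env")   # 1. save the running version
    eb deploy "$env"                        # 2. deploy the new build
    status=$(curl -s -o /dev/null -w '%{http_code}' "$url")
    if needs_rollback "$status"; then       # 3. roll back to the old version
        echo "Got HTTP $status, rolling back to $prev"
        aws elasticbeanstalk update-environment \
            --environment-name "$env" --version-label "$prev"
    fi
}

# Usage (placeholder names):
# deploy_with_rollback my-app my-env http://my-env.elasticbeanstalk.com/
```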

Related

Disabling all builds after migrating Jenkins

I am in the process of migrating a Jenkins server from an internal resource to AWS EC2. I have finished copying all files in /var/lib/jenkins. However, when I start Jenkins it immediately wants to run builds, and they all fail because I still need to make some changes; the devs don't appreciate the flood of failure emails.
How do I start Jenkins with all jobs/builds disabled by default, so I can test and configure things before cutting over to the new server installation?
Here is a useful link! This Groovy script needs to be placed in $JENKINS_HOME/init.groovy
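If you'd rather not use the Groovy hook, a shell-level alternative is to flip each job's <disabled> flag on disk while Jenkins is stopped. This is a sketch assuming the standard $JENKINS_HOME/jobs/*/config.xml layout; back up the directory first.

```shell
#!/bin/sh
# Disable every Jenkins job on disk before starting the server, by
# setting <disabled>true</disabled> in each job's config.xml.
# Run this while Jenkins is stopped.
disable_all_jobs() {
    jenkins_home="$1"
    for cfg in "$jenkins_home"/jobs/*/config.xml; do
        [ -f "$cfg" ] || continue
        sed -i 's|<disabled>false</disabled>|<disabled>true</disabled>|' "$cfg"
    done
}

# Usage: disable_all_jobs /var/lib/jenkins
```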

Elastic Beanstalk: Deploy branch to env

I'm considering migrating a custom-hosted Rails app to Elastic Beanstalk.
I've created a simple Rails app and managed to deploy it on Elastic Beanstalk. There are still a few things I didn't manage to work out:
How can I deploy a branch or specific code to my app?
Is the deployed version from the last commit or from my current workspace?
What are the best practices when handling deployment on Beanstalk?
Amazon has this document (link), but it seems to be deprecated and I can't figure out how to do it on the current version:
elad:...$ eb --version
EB CLI 3.7 (Python 2.7.1)
I'm not sure whether my solution is best practice; I'll just share it here, and all comments are welcome.
How can I deploy a branch or specific code to my app?
Beanstalk supports deploying the last commit on the current branch (which is first uploaded to S3) using the EB command line.
It can also deploy from a zipped file, which is likewise uploaded to S3.
This is reflected in your environment settings in the Beanstalk console.
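To make the deployment variants concrete, here is a sketch of the corresponding EB CLI 3.x invocations, wrapped in small functions with an optional dry-run switch (EB=echo just prints the command). The environment names are the hypothetical ones used in this answer.

```shell
#!/bin/sh
# EB CLI 3.x deployment variants. Set EB=echo for a dry run.
EB="${EB:-eb}"

# 1. Deploy the last commit on the current branch (the CLI zips it and
#    uploads the bundle to S3 before deploying):
deploy_last_commit() { $EB deploy "$1"; }

# 2. Deploy what is currently staged in the git index instead:
deploy_staged() { $EB deploy "$1" --staged; }

# 3. Redeploy an application version already stored in S3, by label:
deploy_version() { $EB deploy "$1" --version "$2"; }

# Usage: deploy_last_commit mercury-dev-staging
```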
Is the deployed version from the last commit or from my current workspace?
From the last commit.
What are the best practices when handling deployment on Beanstalk?
My solution #1: Define which branch will be deployed to a specific environment
In .elasticbeanstalk/config.yml
# .....
branch-defaults:
develop:
environment: mercury-dev-staging
master:
environment: mercury-dev
# .....
Relying on this config, I always switch to the develop branch to deploy to the mercury-dev-staging env, and to master for mercury-dev. This avoids mistakes like deploying the develop branch to the production env.
My solution #2: Define some alias commands for quick deployment:
In ~/.bash_profile (I'm using macOS):
alias deploy_production="eb deploy mercury-dev;"
alias deploy_staging="eb deploy mercury-dev-staging;"
Now I just type deploy_staging for a staging deployment. This is convenient but risky, because you may deploy a feature that is still in development to production.
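One way to keep the convenience while reducing that risk (a sketch, using the same hypothetical environment names) is to replace the production alias with a function that demands typed confirmation:

```shell
# In ~/.bash_profile: require typed confirmation before production deploys.
deploy_staging() { eb deploy mercury-dev-staging; }

deploy_production() {
    printf 'Deploy to PRODUCTION (mercury-dev)? Type yes to continue: '
    read -r answer
    if [ "$answer" = "yes" ]; then
        eb deploy mercury-dev
    else
        echo "Aborted."
        return 1
    fi
}
```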
Someone considering their options could take a look at AWS CodePipeline. You define the specific GitHub repo branch; if you push a change to that branch, CodePipeline detects it and starts a pipeline run.
This is relevant to Elastic Beanstalk because in Step 4 of CodePipeline setup, you can deploy to AWS Elastic Beanstalk (among other targets).

Continuous Integration: running Jenkins Build on newly created EC2 Instance using AWS Cloudformation+OpsWorks, What is best practice?

I'm looking for the best practice.
The current situation is:
I have a running staging instance on which Jenkins is installed. In Jenkins I have created a job/project which uses AWS CloudFormation and OpsWorks to create a new EC2 instance. I used OpsWorks, Chef and Berkshelf to automatically fetch our Git repository from Bitbucket onto the newly created EC2 instance.
My Goal:
I want to set up a CI environment. I want to manually start the Jenkins job on my staging server. Then a complete new EC2 instance is set up using CloudFormation and OpsWorks. (Up to this point everything already works; is this a good practice?) Now I want to automatically execute my tests on the newly created EC2 instance and save the results (did the tests pass? output of code-quality tools such as CodeSniffer, ...) in Jenkins on my staging server. After running the tests I want to terminate the EC2 instance.
I think I could install a Jenkins slave on the created EC2 instance and use my staging server as the Jenkins master.
Is this best practice?
How could I achieve running tests on a fresh test server, saving the results, and shutting down the server afterwards?
Thanks for your help!
Best regards,
Chris
Have you considered using the Jenkins Amazon EC2 plugin? It should take care of creating a slave and registering it with Jenkins for you:
https://wiki.jenkins-ci.org/display/JENKINS/Amazon+EC2+Plugin
You might find this helpful: https://github.com/jadekler/git-chef-basic-jenkins-ci. It's a bit of Chef around spinning up Jenkins on an EC2 instance. You can adjust the Jenkins jobs and Chef scripts as needed (maybe you don't need Ruby/Java but instead need Go or Python). Happy to help further if needed.

How to deploy a successful build using Travis CI and Scalr

We're currently evaluating CI servers and Travis CI caught our eye since it is a hosted solution. I haven't been able to find any information about it being able to deploy to Scalr though. Has anyone had any luck setting this up? I found information about using Jenkins to deploy to Scalr but I'd rather not go with Jenkins.
Thanks.
Deploying an application upon a Travis CI build success is functionally similar to deploying one upon a Jenkins success. All you need to do is hook into Scalr through its API when your build succeeds.
Using Travis CI, you can't really run arbitrary post-build shell scripts (unlike Jenkins). This makes integration a bit more complicated than with Jenkins (where you would just use the Scalr command-line tools to call the Scalr API), but it remains feasible.
All you need to do is have Travis CI send a notification to a webhook endpoint on a webapp you control (host it on your cloud infrastructure, or e.g. on Heroku), and have that webapp call the Scalr API.
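As a sketch of the Travis CI side, the webhook notification is configured in .travis.yml. The endpoint URL below is a placeholder for the webapp you control, which would in turn call the Scalr API:

```yaml
# .travis.yml (fragment)
notifications:
  webhooks:
    urls:
      - https://deploy-hook.example.com/travis   # placeholder endpoint
    on_success: always   # notify so the endpoint can trigger the Scalr deploy
    on_failure: never    # don't deploy failed builds
```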
Disclaimer: I work at Scalr.

How can I run a script on my server using gcloud compute?

I'm deploying my Rails apps on Compute Engine, and my code is hosted on GitHub. I want to push changes to my master branch and then execute a gcloud compute command that tells my instances to pull the master branch and restart nginx.
If I can't execute a script from SSH, what's the best way to tell my instances to update to the latest git commit and restart, so my apps are running on the latest codebase?
I've tried using the Release Pipeline, but it doesn't seem to work for Rails.
You can use a server automation system for something like this. For example:
SaltStack allows remote command invocation, as well as a thousand other useful server-management features.
Ansible, which is built on top of SSH, is great for running commands remotely.
Most other server-automation systems (Chef, Puppet) also provide some way to run a command remotely.
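Both approaches boil down to running one remote command per instance. Here is a sketch of two styles; the instance, zone and inventory-group names are placeholders, and setting RUNNER=echo gives a dry run that only prints the command.

```shell
#!/bin/sh
# Run the update step remotely. RUNNER=echo for a dry run; empty to execute.
RUNNER="${RUNNER:-}"

# Option A: plain gcloud, SSH into the instance and run the update command.
update_via_gcloud() {
    $RUNNER gcloud compute ssh "$1" --zone "$2" \
        --command 'cd /srv/app && git pull origin master && sudo service nginx restart'
}

# Option B: Ansible ad-hoc command against an inventory group, over SSH.
update_via_ansible() {
    $RUNNER ansible "$1" -m shell \
        -a 'cd /srv/app && git pull origin master && sudo service nginx restart'
}

# Usage: update_via_gcloud my-instance us-central1-a
#        update_via_ansible webservers
```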
