Is there any way to specify which git branch's code gets deployed to an Elastic Beanstalk environment?
Assume I have two git branches named test and stage, and an Elastic Beanstalk environment named test-env.
Now, I set the branch defaults in config.yml as below:
branch-defaults:
  test:
    environment: test-env
    group_suffix: null
global:
  application_name: test
  default_ec2_keyname: abcde
  default_platform: Ruby 2.2 (Puma)
  default_region: us-west-2
  profile: eb-cli
  sc: git
Now, what I need is this: if I deploy from the stage branch with eb deploy test-env, it should either automatically deploy the code from the test branch or throw an error.
Is there any way to do this? If not, please suggest another way.
Thanks.
This isn't something that the EB CLI supports; it will always run a deployment from the current branch. However, it's certainly something you can script (I'm assuming you're running under bash, it wouldn't be too hard to port to Windows command shell using the for command to extract the current branch name):
deployTest.sh:
#!/bin/bash
# Grab the current branch name
current=`git rev-parse --abbrev-ref HEAD`
# Just in case we have any in-flight work, stash it
git stash
# Switch to 'test' branch
git checkout test
# Deploy to test-env
eb deploy test-env
# Switch back to whatever branch we were on
git checkout "$current"
# Restore in-flight work
git stash pop
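The question also asked for the alternative of failing fast instead of switching branches. A small guard wrapper could do that (a sketch; the branch and environment names come from the question, the script name is made up):

```shell
#!/bin/bash
# deploy-guard.sh (hypothetical name): only deploy test-env when we are
# actually on the 'test' branch; otherwise print an error and skip.
expected_branch="test"

check_branch() {
  # $1 = current branch, $2 = expected branch; fail if they differ
  if [ "$1" != "$2" ]; then
    echo "Refusing to deploy: on branch '$1', expected '$2'" >&2
    return 1
  fi
}

current=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)
if check_branch "$current" "$expected_branch"; then
  eb deploy test-env
fi
```

Run from the stage branch this prints the refusal and never calls eb, which matches the "or it should throw an error" behavior the question asked for.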
I am running a CI/CD pipeline as a test in Jenkins. The first task in this pipeline is to clone a repository.
I am getting an error that says
cd /var/lib/jenkins/workspace/MyProjectPipeline-Dev/docker/apache
/var/lib/jenkins/workspace/MyProjectPipeline-Dev@tmp/durable-2f74d056/script.sh: line 9: cd: /var/lib/jenkins/workspace/MyProjectPipeline-Dev/docker/apache: No such file or directory
This pipeline is set up on an AWS EC2 instance. I installed git on this instance, so I don't know why the clone isn't working.
Here is the log for the pipeline:
Because when you clone a GitHub repo with git clone https://github.com/subsari/snippets.git, it is cloned into a directory named snippets, so your docker/apache directory is actually inside /var/lib/jenkins/workspace/MyProjectPipeline-Dev/snippets/.
You need to cd as
cd /var/lib/jenkins/workspace/MyProjectPipeline-Dev/snippets/docker/apache
or you can also use the dir step in your Jenkinsfile, as:
dir("snippets/docker/apache") {
    sh "pwd"
    sh './script.sh'
}
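The directory layout is easy to verify: git clone derives the checkout directory from the repository URL, which a quick shell check mirrors (URL taken from the question):

```shell
# git clone <url> checks out into a directory named after the repo
url="https://github.com/subsari/snippets.git"
# basename strips the leading path and the .git suffix, just like git does
dir=$(basename "$url" .git)
echo "$dir"   # prints: snippets
# so the cloned files live under ./snippets/docker/apache
```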
I want to trigger a pipeline on my OpenShift cluster when an event occurs on Bitbucket (a push, for example). I configured a webhook correctly, following the instructions on the OpenShift documentation pages. However, I had to change the OpenShift template of my pipeline, which generated some conflicts.
The BuildConfig looks like this:
- apiVersion: "v1"
  kind: "BuildConfig"
  metadata:
    name: "${SERVICE_NAME}-pipeline"
  spec:
    source:
      contextDir: '${APPLICATION_GIT_JENKINSFILE_REPO_CONTEXT_DIR}'
      git:
        ref: master
        uri: '${APPLICATION_GIT_JENKINSFILE_REPO}'
      sourceSecret:
        name: git-secret
      type: Git
    strategy:
      jenkinsPipelineStrategy:
        jenkinsfilePath: Jenkinsfile
    triggers:
      - type: "Bitbucket"
        bitbucket:
          secretReference:
            name: "mysecret"
So, in the 'source' component I reference a git repository where my Jenkinsfile is located. This way I can have many pipelines with only a single, centralized Jenkinsfile. Note that this repo is completely different from the repository of the api for which I'm configuring the webhook.
This approach fails on an automatic trigger, however, because the payload sent to OpenShift contains the commit id of the changes in the respective api repository. OpenShift (I don't know why) tries to associate that commit with the repo that is referenced in this template (the Jenkinsfile repo).
The logs are the following:
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url http://jenkinsfile-repo.git # timeout=10
Fetching upstream changes from http://jenkinsfile-repo.git
> git --version # timeout=10
using GIT_ASKPASS to set credentials git-secret
> git fetch --tags --progress http://jenkinsfile-repo.git +refs/heads/*:refs/remotes/origin/*
> git rev-parse 79370e4fa88f19c693d85d82fbdbed77620d048b^{commit} # timeout=10
hudson.plugins.git.GitException: Command "git rev-parse 79370e4fa88f19c693d85d82fbdbed77620d048b^{commit}" returned status code 128:
stdout: 79370e4fa88f19c693d85d82fbdbed77620d048b^{commit}
stderr: fatal: ambiguous argument '79370e4fa88f19c693d85d82fbdbed77620d048b^{commit}': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2016)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1984)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1980)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1612)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1624)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.revParse(CliGitAPIImpl.java:809)
at hudson.plugins.git.GitAPI.revParse(GitAPI.java:316)
at hudson.plugins.git.RevisionParameterAction.toRevision(RevisionParameterAction.java:98)
at hudson.plugins.git.GitSCM.determineRevisionToBuild(GitSCM.java:1070)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1187)
at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:113)
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:144)
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:67)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:303)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Finished: FAILURE
There we can see the behavior I tried to explain: '79370e4fa88f19c693d85d82fbdbed77620d048b' was the commit id in the api repo, which OpenShift is trying to resolve against the jenkinsfile repo.
If I could, for example, ignore the payload, the problem wouldn't exist.
Thanks for the help.
I don't think you need to put type: Git (it's a duplicate); also try using the https:// Bitbucket URL:
source:
  contextDir: '${APPLICATION_GIT_JENKINSFILE_REPO_CONTEXT_DIR}'
  git:
    ref: master
    uri: '${APPLICATION_GIT_JENKINSFILE_REPO}'
  sourceSecret:
    name: git-secret
  type: Git    # <- remove this line and try?
I managed to implement a workaround, although I still don't understand why the behavior I specified previously occurs.
Basically the current solution looks like this:
The OpenShift template references the git repository of the respective API, and that repository has a Jenkinsfile of its own, which is the same for every API. The only thing this Jenkinsfile does, however, is call a groovy script that is centralized in a separate git repository and declared as a shared library in Jenkins.
This way, if we must change something (a stage of a pipeline, for example), we only need to change it in a single location, which was the objective from the beginning.
Is there any way in which I can pass the branch as a dynamic variable to circle.yml?
example:
suppose I want to deploy a 'random_new' branch to the server if all tests are green; how can I achieve that in circle.yml?
deployment:
  staging:
    branch: $dynamic_branch || master
    commands:
      - curl -X POST https://hooks.cloud66.com/stacks/redeploy/my_cloud_redeploy_url
more details here
I've got a Lektor site that I'm trying to deploy automatically in response to pull requests and commits, using the Travis CI trigger approach from the Lektor docs.
The Lektor configuration works fine from the command line.
The Travis build starts, and appears to build the site without problems - but when it gets to deployment, the log says the following:
Installing deploy dependencies
!!! Script support is experimental !!!
Preparing deploy
Cleaning up git repository with `git stash --all`. If you need build artifacts for deployment, set `deploy.skip_cleanup: true`. See https://docs.travis-ci.com/user/deployment/#Uploading-Files.
No local changes to save
Deploying application
Deploying to ghpages-https
Build cache: /home/travis/.cache/lektor/builds/d3a411e13041731555222b901cff4248
Target: ghpages+https://pybee/pybee.github.io?cname=pybee.org
Initialized empty Git repository in /home/travis/build/pybee/pybee.github.io/temp/.deploytemp9xhRDc/scratch/.git/
Fetching origin
fatal: repository 'https://github.com/pybee/pybee.github.io/' not found
error: Could not fetch origin
fatal: repository 'https://github.com/pybee/pybee.github.io/' not found
Done!
For a full log, see here.
I've checked the credentials in the Travis CI configuration for the repository; I'm as certain as I can be that they're correct. I've tried using the same configuration (exporting LEKTOR_DEPLOY_USERNAME and LEKTOR_DEPLOY_PASSWORD locally), and it works fine.
hammer:pybee.org rkm$ lektor deploy ghpages-https
Deploying to ghpages-https
Build cache: /Users/rkm/Library/Caches/Lektor/builds/a269cf944d4302f15f78a1dfb1602486
Target: ghpages+https://pybee/pybee.github.io?cname=pybee.org
Initialized empty Git repository in /Users/rkm/projects/beeware/pybee.org/temp/.deploytempOh4p98/scratch/.git/
Fetching origin
From https://github.com/pybee/pybee.github.io
* [new branch] master -> origin/master
On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working directory clean
Everything up-to-date
Done!
Any suggestions on the cause of this error?
It turns out this is a bug in Lektor.
If you use the following in your <project>.lektorproject:
[servers.ghpages-https]
target = ghpages+https://pybee/pybee.github.io?cname=pybee.org
and the following in your .travis.yml:
language: python
python: 2.7
cache:
  directories:
    - $HOME/.cache/pip
    - $HOME/.cache/lektor/builds
install: "pip install git+https://github.com/singingwolfboy/lektor.git@fix-ghpages-https-deploy#egg=lektor"
script: "lektor build"
deploy:
  provider: script
  script: "lektor deploy ghpages-https"
  on:
    branch: lektor
(i.e., use the PR branch for deployment), builds will deploy as expected.
In my travis script I have the following:
after_success:
  - ember build --environment=production
  - ember build --environment=staging --output-path=dist-staging
After both of these build, I conditionally deploy to S3 the one that is appropriate, based on the current git branch.
It works, but it would save time if I only built the one I actually need. What is the easiest way to build based on the branch?
Use the test command, as shown here:
after_success:
  - test $TRAVIS_BRANCH = "master" &&
    ember build
All Travis environment variables are listed here.
You can execute shell script in after_success and check the current branch using travis environment variables:
#!/bin/bash
if [[ "$TRAVIS_BRANCH" != "master" ]]; then
  echo "We're not on the master branch."
  # analyze current branch and react accordingly
  exit 0
fi
Put the script somewhere in the project and use it like:
after_success:
- ./scripts/deploy_to_s3.sh
There might be other useful travis variables to you, they are listed here.
With the following entry the script will only be executed if it is not a PR and the branch is master.
after_success:
- 'if [ "$TRAVIS_PULL_REQUEST" = "false" -a "$TRAVIS_BRANCH" = "master" ]; then bash doit.sh; fi'
It is not enough to evaluate TRAVIS_BRANCH. TRAVIS_BRANCH is set to master when a PR against master is created by a fork.
See also the description of TRAVIS_BRANCH on https://docs.travis-ci.com/user/environment-variables/:
for push builds, or builds not triggered by a pull request, this is the name of the branch
for builds triggered by a pull request this is the name of the branch targeted by the pull request
for builds triggered by a tag, this is the same as the name of the tag (TRAVIS_TAG)
If you work with tags you have to consider TRAVIS_TAG as well. If TRAVIS_TAG is set, TRAVIS_BRANCH is set to the value of TRAVIS_TAG.
after_success:
  - if [ "$TRAVIS_PULL_REQUEST" = "false" -a \( "$TRAVIS_BRANCH" = "master" -o -n "$TRAVIS_TAG" \) ]; then doit.sh; fi
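The combined condition can also be factored into a small function so the after_success line stays readable (a sketch; should_deploy is a made-up name, the variable semantics are as described above):

```shell
#!/bin/bash
# should_deploy: succeed only for non-PR builds that are either on
# master or triggered by a tag
should_deploy() {
  [ "$TRAVIS_PULL_REQUEST" = "false" ] &&
    { [ "$TRAVIS_BRANCH" = "master" ] || [ -n "$TRAVIS_TAG" ]; }
}

# Example: a push build on master with no tag should deploy
TRAVIS_PULL_REQUEST=false TRAVIS_BRANCH=master TRAVIS_TAG=""
if should_deploy; then echo "deploying"; else echo "skipping"; fi
# prints: deploying
```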
I would say the above solutions are good because they would transfer to non-Travis CI build systems as well, but TravisCI also has a built-in feature for something similar:
stages:
  - name: deploy
    # require the branch name to be master (note: for PRs this is the base branch name)
    if: branch = master
I could not get it to work with after_success, though. The following page has a section on "Testing Conditions", which I didn't set up:
https://docs.travis-ci.com/user/conditional-builds-stages-jobs/