How to generate ".env" when Deploying with Bitbucket AWS CodeDeploy add-on?
I see that bitbucket-pipelines.yml can generate a .env from Bitbucket environment variables, but how do I tie that up with the Bitbucket AWS CodeDeploy add-on?
appspec.yml can trigger a script on deployment, but how can I make that script get the .env from Bitbucket environment variables?
Bitbucket should not create the .env; that service should know nothing about the production .env. Instead, the production .env should sit in a secure AWS S3 bucket from which only the AWS CodeDeploy scripts can take it and put it on the instance.
It would be copied like this:
sudo aws --region us-east-2 s3 cp "s3://${S3_NAME}/prod.env" "${EC2_DIRECTORY}/.env"
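For example (a minimal sketch, not the poster's exact setup: the script name scripts/fetch_env.sh, the bucket name, and the application directory are all placeholder assumptions), the copy could live in a small script that CodeDeploy runs from an AfterInstall hook:
#!/bin/bash
# Hypothetical scripts/fetch_env.sh, registered under the AfterInstall hook in appspec.yml.
# S3_NAME and EC2_DIRECTORY are placeholders for your bucket and application directory.
set -euo pipefail
S3_NAME="my-config-bucket"
EC2_DIRECTORY="/var/www/my-app"
aws --region us-east-2 s3 cp "s3://${S3_NAME}/prod.env" "${EC2_DIRECTORY}/.env"
chmod 600 "${EC2_DIRECTORY}/.env"  # keep the secrets readable only by the owner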
Looking at the documentation, Bitbucket should make the environment variables available in the build environment, and you should be able to access them directly in the scripts run by your appspec.yml, just as you would access any other environment variables.
For example, if we had an appspec like this:
hooks:
  AfterInstall:
    - location: scripts/runTests.sh
      timeout: 180
You could access the environment variables in scripts/runTests.sh like this:
# scripts/runTests.sh
echo "$BITBUCKET_BUILD_NUMBER"
# Or, use in some other valid way in your script
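If you do want Pipelines itself to generate the .env before the bundle is handed to the CodeDeploy add-on (a sketch only; DB_PASSWORD and API_KEY are example variable names, not anything defined in the question), a step in the bitbucket-pipelines.yml script section could write the variables out:
# Hypothetical commands inside a bitbucket-pipelines.yml step's script section:
# write selected repository/deployment variables into .env before packaging the bundle.
echo "DB_PASSWORD=${DB_PASSWORD}" > .env
echo "API_KEY=${API_KEY}" >> .env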
Related
How can I deploy AWS resources using an external Jenkins and Terraform? I don't want my Jenkins running on EC2 or in AWS, because the instance may terminate at any time and I then have to rebuild it from an AMI or repeat all the steps I did the first time (saving all settings, credentials, etc.). So I'm looking for a way to install Jenkins on my own VM/VirtualBox, run the pipeline job there, and build AWS resources/services using Terraform.
You can run Terraform or Jenkins from anywhere to create resources in AWS.
Jenkins is just an orchestration tool that uses Terraform to create the resources.
The only thing that needs to change is how Terraform authenticates with your AWS environment.
If Terraform runs on an AWS EC2 instance, it can use the EC2 instance metadata to authenticate with AWS.
Once you move to your local system or a VM, you have to change how Terraform authenticates.
You can use the following block in Terraform to authenticate with AWS:
provider "aws" {
region = "us-west-2"
access_key = "my-access-key"
secret_key = "my-secret-key"
}
Please refer to the Terraform documentation for more authentication methods:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration
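As an alternative to hard-coding keys in the provider block (a sketch, assuming you run Terraform from a shell on your VM; the key values are placeholders), the AWS provider also picks up the standard AWS environment variables:
# Export credentials in the shell before invoking Terraform; the AWS provider reads
# AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION automatically.
export AWS_ACCESS_KEY_ID="my-access-key"
export AWS_SECRET_ACCESS_KEY="my-secret-key"
export AWS_DEFAULT_REGION="us-west-2"
terraform init
terraform plan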
I am trying to upload a Git repo to S3 through a Jenkins job. I can't find any documentation related to file uploads from a Jenkins job. Can anyone please let me know how to upload a Git repo from a Jenkins job to an S3 bucket?
Thanks in advance.
You may use the AWS CLI to accomplish that.
Install the AWS CLI on the server Jenkins is hosted on and make sure the jenkins user can use it.
In your Job, set the following Environment Variables:
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION=
In your Jenkins job's command, use:
aws s3 cp . s3://target-bucket/target-path/ --recursive
Your target path will then contain the whole codebase after the job completes.
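Put together as a single shell build step (a sketch; the bucket name and path are placeholders, and excluding .git is just a suggestion so that only the working tree is uploaded):
# Hypothetical "Execute shell" build step: the job has already checked out the repo
# into the workspace, so copy everything except the .git directory to S3.
cd "$WORKSPACE"
aws s3 cp . s3://target-bucket/target-path/ --recursive --exclude ".git/*"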
The documentation talks about provisioning Docker containers.
Ansible can be used for environment provisioning with Jenkins.
Using a pipeline script, I would like to provision an AWS EC2 instance on the AWS cloud using an AWS CloudFormation template.
Can a Jenkins pipeline script reuse CloudFormation templates for provisioning on the AWS cloud?
Yes, you can use a Jenkins pipeline to provision resources in the cloud. You can store your CloudFormation code in either SVN or Git, write a script that pulls the templates from SVN or Git, and provision the resources using AWS CLI commands in the script your pipeline runs to deploy to the cloud.
You can also create separate jobs for the different stages of the pipeline and make it work that way.
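For the CloudFormation step itself (a minimal sketch; the template file, stack name, and parameter are placeholders, not anything from the question), the AWS CLI's deploy command creates or updates the stack in one call:
# Hypothetical shell step in the Jenkins pipeline: deploy a CloudFormation template
# that defines the EC2 instance. Stack and template names are placeholders.
aws cloudformation deploy \
  --template-file ec2-instance.yml \
  --stack-name my-ec2-stack \
  --parameter-overrides InstanceType=t3.micro \
  --capabilities CAPABILITY_NAMED_IAM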
I have what I think is a simple use case: Jenkins builds a static website, so at the end of the build I have a folder like $WORKSPACE/site-result.
Now I want to upload this folder to S3 (and clean the bucket if something is already there). How can I do it?
I'm using a pipeline, but can switch to a freestyle project if necessary. So far I have installed the S3 plugin (S3 publisher plugin), created an IAM user, and added the credentials in the "Configure System" section, but I can't find any further info. Thanks!
If the answer suggesting the Pipeline AWS Plugin doesn't work, you could always have an upload step in your pipeline where you use sh to call the AWS CLI:
aws s3 cp $WORKSPACE/site-result s3://your/bucket --recursive --include "*"
Source: http://docs.aws.amazon.com/cli/latest/reference/s3/
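To also handle the "clean the bucket" part of the question (a sketch; the bucket name is a placeholder), aws s3 sync with --delete mirrors the folder and removes objects that no longer exist locally:
# Mirror the build output to the bucket; --delete removes remote objects
# that are no longer present in the local folder. Bucket name is a placeholder.
aws s3 sync "$WORKSPACE/site-result" s3://your-bucket --delete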
You can also use the s3Upload step from the Pipeline AWS plugin and set the sourceFile parameter to '**/*'.
I'm considering migrating a custom-hosted Rails app to Elastic Beanstalk.
I've created a simple Rails app and managed to deploy it to Elastic Beanstalk. There are still a few things I didn't manage to figure out:
How can I deploy a branch or a specific code to my app?
Is the deployed version from the last commit or from my current workspace?
What are the best practices when handling deployment on Beanstalk?
Amazon has this document (link), but it seems to be deprecated and I can't figure out how to do it with the current version:
elad:...$ eb --version
EB CLI 3.7 (Python 2.7.1)
I'm not sure whether my solution is best practice or not; I'll just show it here and welcome any comments on it.
How can I deploy a branch or a specific code to my app?
Beanstalk supports deploying the last commit on the current branch (which is actually uploaded to S3 first) using the EB command line.
It can also deploy from a zipped file, which is likewise uploaded to S3.
Here is what it looks like in your environment settings in the Beanstalk console.
Is the deployed version from the last commit or my current workspace?
From the last commit.
What are the best practices when handling deployment on Beanstalk?
My solution #1: Define which branch will be deployed to a specific environment
In .elasticbeanstalk/config.yml
# .....
branch-defaults:
  develop:
    environment: mercury-dev-staging
  master:
    environment: mercury-dev
# .....
Relying on this config, I always switch to the develop branch to deploy to the mercury-dev-staging environment, and to master for mercury-dev. This avoids mistakes like deploying the develop branch to the production environment.
My solution #2: Define some alias commands for quick deployment:
In ~/.bash_profile (I'm using macOS):
alias deploy_production="eb deploy mercury-dev;"
alias deploy_staging="eb deploy mercury-dev-staging;"
Now I just type deploy_staging for a staging deployment. This is convenient but risky, because you may deploy a feature you are still developing to production.
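One possible safeguard against that risk (a sketch, reusing the environment and branch names from the example above) is to make the production alias a small shell function that refuses to run unless you are on master:
# Hypothetical replacement for the deploy_production alias in ~/.bash_profile:
# only deploy to the production environment when the current branch is master.
deploy_production() {
  local branch
  branch=$(git rev-parse --abbrev-ref HEAD)
  if [ "$branch" != "master" ]; then
    echo "Refusing to deploy branch '$branch' to mercury-dev (production). Switch to master first."
    return 1
  fi
  eb deploy mercury-dev
}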
Someone considering their options could take a look at AWS CodePipeline. You define the specific GitHub repository branch; if you push a change to that branch, CodePipeline detects it and starts a pipeline run.
This is relevant to Elastic Beanstalk because in step 4 of the CodePipeline setup you can choose AWS Elastic Beanstalk as the deployment provider (among others).