S3 bucket already exists in all stages except one - serverless

I am deploying a CloudFormation/Serverless YAML template with an S3 bucket and an S3 bucket policy resource. I have stages bob, dev, and stage, and the bucket name follows the pattern mybucket-<stage>. When I deploy to bob, the deployment is successful. When I try the other stages, I get an error like "An error occurred: mybucket-dev already exists". Do you know how to fix the issue?

Related

AWS Elastic Container Service CI Template issue

I’m on gitlab.com and tried deploying to an AWS ECS Fargate container using the instructions for including the Deploy-ECS.gitlab-ci.yml template found here.
It is failing with the following error:
Authenticating with credentials from job payload (GitLab Registry)
$ ecs update-task-definition
An error occurred (InvalidParameterException) when calling the UpdateService operation: Task definition does not support launch_type FARGATE.
Running after_script
00:01
Uploading artifacts for failed job
00:02
ERROR: Job failed: exit code 1
I believe I may have found a solution here, where Ryangr advises that the --requires-compatibilities "FARGATE" flag needs to be added to the aws ecs register-task-definition command. This is supported by the AWS documentation:
In the AWS Management Console, for the Requires Compatibilities field, specify FARGATE.
In the AWS CLI, specify the --requires-compatibilities option.
In the Amazon ECS API, specify the requiresCompatibilities flag.
I'd like to know if there is a way to override the Deploy-ECS.gitlab-ci.yml template and add that or if I just need to submit an issue ticket with GitLab.
Check again with GitLab 13.2 (July 2020):
Bring Fargate support for ECS to Auto DevOps and ECS template
We want to make AWS deployments easier.
In order to do so, we recently delivered a CI/CD template that deploys to AWS ECS:EC2 targets and even connected it to Auto DevOps.
Scaling container instances in EC2 is a challenge, so many users choose to use AWS Fargate instead of EC2 instances.
In this release we added Fargate support to the template, which continues to work with Auto DevOps as well, so more users can benefit from it.
This is linked to issue 218841 which includes the proposal:
Use the gitlab-ci.yml template for deploying to AWS Fargate.
We will enforce the presence of --requires-compatibilities argument from the launch type - this will only be passed in case Fargate is selected.
If ECS is selected as the launch type this is ignored.
As noted by David Specht in the comments, this has been closed with issue 218798 and cloud-deploy MR (Merge Request) 16, in commit 2c3d198.
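For reference, a register-task-definition call that includes the flag discussed above might look roughly like the following sketch; the family name, container image, CPU/memory values and execution role ARN are placeholders, not values taken from the GitLab template:
# Sketch: register a task definition that is compatible with Fargate.
# Family, image, sizes and the execution role ARN below are placeholders.
aws ecs register-task-definition \
  --family my-fargate-task \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 256 \
  --memory 512 \
  --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
  --container-definitions '[
    {
      "name": "web",
      "image": "registry.example.com/my-app:latest",
      "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
      "essential": true
    }
  ]'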

How to configure a Jenkins pipeline project with no SCM option

We are migrating our source code repository to a cloud bucket, and all the source code that Jenkins uses will be read/downloaded from a bucket such as S3.
This also involves rewriting our Jenkins pipeline that reads from SCM (git). The Jenkins pipeline project configuration doesn't allow any independent script execution (say, wget or downloading a file from the bucket using shell).
I would like to do the following, if possible:
1) Download the Jenkinsfile from the S3 bucket to the workspace
2) Choose None for SCM in the Pipeline section
3) Give the path to the downloaded Jenkinsfile in the Script Path field
My question is, how can I make #1 possible? Image attached.
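For what it's worth, a minimal sketch of step 1, assuming the AWS CLI is installed on the Jenkins agent and credentials are already configured; the bucket name and key below are hypothetical:
# Sketch: fetch the Jenkinsfile from S3 into the job workspace.
# Bucket and key are placeholders - replace them with your own.
aws s3 cp s3://my-build-config-bucket/pipelines/Jenkinsfile "$WORKSPACE/Jenkinsfile"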

Isolating Secrets for Pipelines in Jenkins

We are implementing a GitOps-like CI/CD setup in Jenkins, where we are deploying to OpenShift/Kubernetes. For the sake of simplicity, let's say we have only 2 repositories:
The first contains the application source code; there is also a Jenkinsfile in the source that defines the build (which also pushes images to a repository).
We have a second repository where the deployment pipeline is defined (Jenkinsfile). This pipeline deploys the image to production (think "kubectl apply").
The problem is that pipeline (2) needs access to credentials that are used to authenticate (against the Kubernetes API) to production. We thought of storing these credentials in Jenkins, but we don't want the first (1) pipeline in the same Jenkins instance to have access to these production credentials.
How could we solve this with Jenkins? (How to store these credentials)
thank you
Just to capture from the comments, there's effectively an answer from RRT in another thread (https://stackoverflow.com/a/42721809/9705485):
Using the Folders and Credentials Binding plugins, you can define credentials at the folder level that are only available to the job(s) inside that folder. The folder-level store becomes available once you have created the folder.
Source: https://support.cloudbees.com/hc/en-us/articles/203802500-Injecting-Secrets-into-Jenkins-Build-Jobs
Another example of adding scoped credentials (this one for dockerhub credentials) is https://liatrio.com/building-docker-jenkins-pipelines/

Trigger a Jenkins job when an S3 file is updated

I'm looking for a way to trigger my Jenkins job whenever a file is created or updated in S3.
I can't seem to find anything by the usual means of searching. It is always about uploading artifacts to S3, rarely about downloading, and even then I can't seem to find a way to trigger off the actual update event.
The only way I can currently figure out how to do this at all would be to sync the file periodically and compare its hash to previous versions, but that is a really terrible solution.
The idea behind this would be to have an agency (which does not have access to our Jenkins) upload their build artifacts and to trigger a deployment from that.
You can use a combination of SNS notifications for new artifacts in the S3 bucket (https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html) and the Jenkins AWS SQS plugin (https://github.com/jenkinsci/aws-sqs-plugin) to trigger a build.
A little bit of manual configuration is required in terms of the AWS SQS plugin, but it should work.
S3 Upload > SNS Notification > Publish to SQS > Trigger Jenkins Build
Ideally it would be straight to Jenkins like so: S3 Upload > SNS Notification > Publish to Jenkins HTTP Endpoint > Trigger Jenkins Build
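For the SNS-to-SQS leg of that flow, the subscription could be created roughly like this; both ARNs are placeholders, and the topic and queue policies that allow S3 to publish and SNS to deliver are assumed to be in place:
# Sketch: subscribe the SQS queue to the SNS topic that receives the S3 events.
# Both ARNs are placeholders.
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:s3-artifact-events \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:us-east-1:123456789012:jenkins-s3-events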
Hope this helps
We can write a cron job on Linux or a PowerShell script on Windows that queries a particular S3 bucket for the given key; if it finds it, you can trigger the Jenkins job.
For this, the Jenkins instance must be in AWS itself if we want to use an IAM role; if not, we need to add AWS credentials.
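As a rough sketch of that polling approach on Linux, assuming the AWS CLI is configured; the bucket, key, Jenkins URL, job name and trigger token are all made up, and the job must have "Trigger builds remotely" enabled:
#!/bin/bash
# Sketch: check S3 for a marker object and, if present, trigger a Jenkins job
# via its remote build trigger URL. All names and the token are placeholders.
BUCKET=my-artifact-bucket
KEY=builds/latest.zip
if aws s3api head-object --bucket "$BUCKET" --key "$KEY" >/dev/null 2>&1; then
  curl -X POST "https://jenkins.example.com/job/deploy-from-s3/build?token=MY_TRIGGER_TOKEN"
fi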
To implement S3 Upload > Publish to SQS > Trigger Jenkins Build (assuming you have appropriate AWS Users, Roles and Policies attached):
Create an AWS SQS Queue
After creating the AWS SQS queue, we need to configure the AWS S3 bucket:
In the S3 bucket's "Events" section, register an "Object Create" event
Provide the SQS queue name. Detailed documentation.
On Jenkins, we need to:
Install the AWS SQS plugin from the Jenkins plugin install page
Configure the AWS SQS plugin to point to the SQS queue in the Jenkins system configuration
Configure the Jenkins Job to "Trigger build when a message is published to an Amazon SQS queue"
Note that the Jenkins user MUST have read access to SQS (all read functions) in addition to S3 access.
Now whenever someone adds/updates anything in the bucket, S3 sends an event notification to SQS, which is then polled by the Jenkins AWS SQS plugin, and the respective job build is triggered!
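For illustration, the AWS-side part of those steps could be scripted roughly like this with the CLI; the queue name, bucket name, region and account ID are placeholders, and the queue policy that allows S3 to send messages to it is assumed to be in place:
# Sketch: create the SQS queue and point S3 "ObjectCreated" events at it.
# Names, region and account ID are placeholders.
aws sqs create-queue --queue-name jenkins-s3-events
aws s3api put-bucket-notification-configuration \
  --bucket my-artifact-bucket \
  --notification-configuration '{
    "QueueConfigurations": [
      {
        "QueueArn": "arn:aws:sqs:us-east-1:123456789012:jenkins-s3-events",
        "Events": ["s3:ObjectCreated:*"]
      }
    ]
  }'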
This article explains the process in detail, AWS to GitHub to Jenkins. If you are just using S3, you would skip the GitHub part.

Jenkins Pipeline - How to upload folder to S3?

I have what I think is a simple use case: Jenkins builds a static website, so at the end of the build I have a folder like $WORKSPACE/site-result.
Now I want to upload this folder to S3 (and clean the bucket if something is already there). How can I do it?
I'm using a pipeline, but can switch to a freestyle project if necessary. So far I have installed the S3 Plugin (S3 publisher plugin), created an IAM user, and added credentials in the "Configure System" section. I can't find any further info. Thanks!
If the answer suggesting the Pipeline AWS Plugin doesn't work, you could always have an upload step in your pipeline where you use sh to call the AWS CLI:
aws s3 cp $WORKSPACE/site-result s3://your/bucket --recursive --include "*"
Source: http://docs.aws.amazon.com/cli/latest/reference/s3/
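If the bucket should also be cleaned of files the build no longer produces, aws s3 sync with the --delete flag is another option; the bucket name is a placeholder:
# Sketch: mirror the build output to the bucket and remove stale objects.
aws s3 sync "$WORKSPACE/site-result" s3://your-bucket --delete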
You have to use the s3Upload step and set the sourceFile parameter to '*/*'
