Update timeout in CodeBuild in a CodePipeline using CDK - aws-cdk

I have the aws-simple-cicd pipeline deployed from https://github.com/awslabs/aws-simple-cicd. I tried to increase the CodeBuild timeout by editing https://github.com/awslabs/aws-simple-cicd/blob/main/lib/projects/deploy-project.ts and adding
timeout: Duration.hours(3),
at line 41.
Then I ran:
cdk synth
cdk deploy
I didn't see this change reflected in the synthesized CloudFormation output or in the deployment. What is going wrong here? What do I need to do to get the CodeBuild timeout increased through the CDK IaC? (I do not wish to change it directly in the AWS console.)
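For reference, a minimal sketch of what that edit is meant to produce, assuming the construct in deploy-project.ts is a codebuild.PipelineProject (CDK v2 imports shown; use the matching @aws-cdk/* packages if the repo pins CDK v1 — construct ID and elided props are placeholders, not the repo's exact code):

import { App, Duration, Stack, StackProps } from 'aws-cdk-lib';
import * as codebuild from 'aws-cdk-lib/aws-codebuild';
import { Construct } from 'constructs';

class DeployStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    // 'DeployProject' and the elided props stand in for the repo's own values.
    new codebuild.PipelineProject(this, 'DeployProject', {
      timeout: Duration.hours(3), // synthesizes as TimeoutInMinutes: 180 on the project
      // ...buildSpec, environment, etc.
    });
  }
}

new DeployStack(new App(), 'DeployStack');

If the edit took effect, cdk diff should show TimeoutInMinutes changing on the AWS::CodeBuild::Project resource. Also note that if the project compiles TypeScript to JavaScript ahead of time, the change only shows up after rebuilding before cdk synth.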

Related

Why is a Jenkins script job failing to use proper AWS credentials?

I have a simple Jenkins job that just runs aws ssm send-command, and it fails with:
"An error occurred (AccessDeniedException) when calling the SendCommand operation: User: arn:aws:sts::1234567890:assumed-role/jenkins-live/i-1234567890abc is not authorized to perform: ssm:SendCommand on resource: arn:aws:ssm:us-east-1:1234567890:document/my-document-name"
However, the IAM permissions are correct. To prove it, I SSH directly onto that instance and run the exact same SSM command, and it works. I verify it's using the instance role by running aws sts get-caller-identity, and it returns arn:aws:sts::1234567890:assumed-role/jenkins-live/i-1234567890abc, which is the same user mentioned in the error message.
So indeed, this assumed role can run the command.
I even modified the Jenkins job to run aws sts get-caller-identity first, and it outputs the same user JSON.
Does Jenkins do some caching that I am unaware of? Why would I get that AccessDeniedException if the jenkins-live user can run the command otherwise?
First, install the AWS Credentials and Pipeline: AWS Steps plugins and register your AWS access key ID and secret access key in the Jenkins credentials store. The next steps depend on whether you're using a freestyle job or a declarative/scripted pipeline.
If you're using a freestyle job: under "Build Environment", tick "Use secret text(s) or file(s)" and follow the next steps. After that, your credentials will be available as variables in your build;
If you're using a declarative/scripted pipeline: Enclose your aws calls with a withAWS block, something like this:
withAWS(region: 'us-east-1', credentials: 'my-pretty-credentials') {
  // let's explode something
}
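For the declarative case, a fuller sketch of wiring the question's command into that block (the credentials ID is the placeholder from above, and the document/instance IDs are the question's own placeholders):

pipeline {
  agent any
  stages {
    stage('SSM') {
      steps {
        withAWS(region: 'us-east-1', credentials: 'my-pretty-credentials') {
          // the question's failing call, now executed with the bound credentials
          sh 'aws ssm send-command --document-name my-document-name --instance-ids i-1234567890abc'
        }
      }
    }
  }
}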
Best regards.

Jenkins: Set job timeout from a variable

I want my Jenkins job to use a timeout value taken from a build variable.
I tried the following but ended up with a Java runtime error.
Note: I am triggering my jobs using the REST API.
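That kind of Java error is often a type problem: the timeout step expects a number, while build parameters arrive as strings. A minimal declarative sketch of a variable-driven timeout (the parameter name TIMEOUT_MINUTES and the build step are hypothetical, not the poster's actual job):

pipeline {
  agent any
  parameters {
    // can be supplied in the REST API trigger, e.g. buildWithParameters?TIMEOUT_MINUTES=90
    string(name: 'TIMEOUT_MINUTES', defaultValue: '30', description: 'Job timeout in minutes')
  }
  stages {
    stage('Build') {
      steps {
        // convert explicitly; passing the raw string here is a classic cause of a ClassCastException
        timeout(time: params.TIMEOUT_MINUTES.toInteger(), unit: 'MINUTES') {
          sh 'make build'
        }
      }
    }
  }
}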

When deploying a job, is it possible that the command returns after the deployment is complete?

I'm using Maven to deploy my jobs on Google Cloud Dataflow, with the following command:
mvn compile exec:java -Dexec.mainClass=org.beam.StreamerRunner -Dexec.args="\
...
--runner=DataflowRunner \
..."
It deploys successfully, and it keeps pulling the logs from the Dataflow job and printing them to the output. I'm wondering if it is possible to tell the deployment not to keep polling and just return.
Indeed, the CI tool (TeamCity) I'm using to deploy my job also ends up waiting forever.
I can obviously run the Maven command under nohup, but maybe there is an option to make the command exit once the deploy is complete.
As Alex pointed out, I was calling waitUntilFinish in my code, so it did exactly what I asked it to do.
It was fixed as soon as I removed the call to
waitUntilFinish()
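In Beam terms, the difference looks like this (a sketch; the pipeline construction is elided and the class name comes from the command above):

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class StreamerRunner {
  public static void main(String[] args) {
    Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
    // ...build the pipeline here...

    PipelineResult result = pipeline.run(); // submits the Dataflow job and returns
    // result.waitUntilFinish();            // the blocking call that kept Maven (and TeamCity) waiting
  }
}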

How to get job variables injected into the docker execution?

I wonder if this is already part of the system...
I need the current GitLab user id and email ($GITLAB_USER_ID, $GITLAB_USER_EMAIL) injected into the execution of the Docker image (to later configure the git repository).
Is there a magic way to do this, or should I explicitly write the export commands into my .gitlab-ci.yml file (as a before_script, for example)?
Thanks.
I got my answer by trying the env command in a build.
So yes, all job variables are available in the Docker execution environment.
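So a before_script is only needed for the git configuration itself, not for exporting anything. A minimal .gitlab-ci.yml sketch (job name and image are placeholders; GITLAB_USER_EMAIL is predefined, and GITLAB_USER_NAME is also available if the numeric $GITLAB_USER_ID is not what git should record):

configure-git:
  image: my-git-image   # placeholder; any image with git available
  before_script:
    - git config --global user.email "$GITLAB_USER_EMAIL"
    - git config --global user.name "$GITLAB_USER_NAME"
  script:
    - env | grep GITLAB_USER   # lists the predefined GITLAB_USER_* variables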

Feed data from parameterized trigger step into execute shell step

I have a requirement to Serverspec-test a CloudFormation stack that is created by a Jenkins job called "Create_Stack".
A second Jenkins job will call the existing Create_Stack job via a Parameterized Trigger, and then in a subsequent Execute Shell step execute the Serverspec test suite.
However, in order to do that, the Execute Shell step needs to know the Cloudformation Stack Name.
At the moment, the Stack Name exists as an Environment Variable in the Create_Stack job, and it also exists in the archived artifact file containing the returned output from the aws cloudformation create-stack command.
I have considered looking up Stack Name in the artifact file in $WORKSPACE/../../Create_Stack/workspace/my-stack-output.json. This isn't ideal as it's awkward, and also vulnerable to a race condition if someone were to run this job again immediately while my test was running.
Is there a clean way to make the Stack Name available to subsequent Execute Shell build steps?
At some point I figured out that the EnvInject Plugin can be used to solve this problem.
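One hypothetical shape for that (the file name, plugin wiring, and test command are illustrative, not the poster's exact setup):

# In Create_Stack, persist the stack name as a properties file and archive it:
echo "STACK_NAME=${STACK_NAME}" > stack_name.properties

# In the calling job, after the Parameterized Trigger step, copy that file in
# (e.g. with the Copy Artifact plugin, pinned to the build the trigger ran),
# then add an EnvInject "Inject environment variables" build step with
# "Properties File Path" set to stack_name.properties.
# Subsequent Execute Shell steps then see the variable:
echo "Testing stack: $STACK_NAME"
bundle exec rspec   # Serverspec suite reads $STACK_NAME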
