How to configure the test stage in CodePipeline to use CodeBuild batch builds and create multiple instances - aws-cdk

I am using the CDK-based code pipeline from https://github.com/awslabs/aws-simple-cicd . I would like to enable batch builds in the test stage as described in https://docs.cypress.io/guides/continuous-integration/aws-codebuild#Using-the-Cypress-Dashboard-with-AWS-CodeBuild
How can batch builds be enabled in a CDK CodePipeline so that tests run in parallel?

As far as I can see, batch builds can be enabled on the CodeBuildAction:
const testAction = new CodeBuildAction({
  actionName: 'Test',
  outputs: [testOutputArtifact],
  input: buildOutputArtifact,
  project: testProject,
  executeBatchBuild: true,
});
The docs state:
/**
 * Trigger a batch build.
 *
 * Enabling this will enable batch builds on the CodeBuild project.
 *
 * @default false
 */
readonly executeBatchBuild?: boolean;
If you have any commands to run, you can create pre/post stage steps.
I would also recommend using the higher-level aws-cdk-lib.pipelines module instead of the aws-cdk-lib.aws_codepipeline module.
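For reference, here is a minimal sketch of how the pieces can fit together: a test project whose buildspec declares a batch build-list (so CodeBuild fans out into parallel runs), plus the batch-enabled action. The construct ids, worker identifiers and Cypress commands are illustrative and not from the original post; buildOutputArtifact is the artifact from the snippet above.
import * as codebuild from 'aws-cdk-lib/aws-codebuild';
import { CodeBuildAction } from 'aws-cdk-lib/aws-codepipeline-actions';

// Test project: the batch/build-list section in the buildspec is what makes
// CodeBuild start one build per identifier when the project runs as a batch.
const testProject = new codebuild.PipelineProject(this, 'TestProject', {
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    batch: {
      'build-list': [
        { identifier: 'tests_worker_1', env: { variables: { WORKER_ID: '1' } } },
        { identifier: 'tests_worker_2', env: { variables: { WORKER_ID: '2' } } },
      ],
    },
    phases: {
      build: { commands: ['npm ci', 'npx cypress run'] },
    },
  }),
});

// Pipeline action: executeBatchBuild runs the project as a batch build and
// enables batch builds on the project, as the docs quoted above describe.
const testAction = new CodeBuildAction({
  actionName: 'Test',
  input: buildOutputArtifact,
  project: testProject,
  executeBatchBuild: true,
});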

Related

aws_cdk events rule target for cdk pipelines fails

The below error pops up when I try to target a CDK pipeline using events targets.
jsii.errors.JavaScriptError:
Error: Resolution error: Supplied properties not correct for "CfnRuleProps"
targets: element 0: supplied properties not correct for "TargetProperty"
arn: required but missing.
The code is below:
from aws_cdk import (
    core,
    aws_codecommit as codecommit,
    aws_codepipeline as codepipeline,
    aws_events as events,
    aws_events_targets as targets
)
from aws_cdk import pipelines
from aws_cdk.pipelines import CodePipeline, CodePipelineSource, ShellStep


class BootStrappingStack(core.Stack):

    def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        repo = codecommit.Repository(
            self, 'Repository',
            repository_name='Repository'
        )

        source_artifact = codepipeline.Artifact()
        cloud_assembly_artifact = codepipeline.Artifact()

        pipeline = CodePipeline(self, 'Pipeline',
            synth=ShellStep("Synth",
                input=CodePipelineSource.code_commit(
                    repository=repo,
                    branch='master'),
                commands=[
                    'pip install -r requirements',
                    'npm install -g aws-cdk',
                    'cdk synth']
            )
        )

        rule = events.Rule(self, 'TriggerPipeline',
            schedule=events.Schedule.expression('rate(1 hour)')
        )
        rule.add_target(targets.CodePipeline(pipeline))
The documentation for aws_cdk.events_targets works with the codepipeline construct, but it doesn't work as documented with CDK pipelines.
This needs to be addressed in the documentation once I know what the fix is. Please help.
As mentioned by @Otavio, you need to use a codepipeline.IPipeline. You can use the pipeline property of the CDK CodePipeline construct, but in order to use it you first need to construct the pipeline with the build_pipeline() method:
pipeline = CodePipeline(self, 'Pipeline',
    synth=ShellStep("Synth",
        input=CodePipelineSource.code_commit(
            repository=repo,
            branch='master'),
        commands=[
            'pip install -r requirements',
            'npm install -g aws-cdk',
            'cdk synth']
    )
)

# You need to construct the pipeline before passing it as a target in the rule
pipeline.build_pipeline()

rule = events.Rule(self, 'TriggerPipeline',
    schedule=events.Schedule.expression('rate(1 hour)')
)

# Using the pipeline property from the CDK CodePipeline
rule.add_target(targets.CodePipeline(pipeline.pipeline))
The problem is that targets.CodePipeline receives a codepipeline.IPipeline as a parameter. But what you are using instead is a pipelines.CodePipeline, which is a different thing. CodePipeline is a more abstract construct, built on top of the codepipeline module.
You can try this:
const pipeline = new CodePipeline(self, 'Pipeline' ....
Then:
rule.addTarget(new targets.CodePipeline(pipeline))
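For completeness, here is a sketch of the build_pipeline()/pipeline-property fix from the accepted answer written in TypeScript (assuming aws-cdk-lib v2 and a pipelines.CodePipeline named pipeline; the rule id and schedule mirror the question):
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';

// Materialize the underlying codepipeline.Pipeline first...
pipeline.buildPipeline();

const rule = new events.Rule(this, 'TriggerPipeline', {
  schedule: events.Schedule.expression('rate(1 hour)'),
});

// ...then pass the concrete IPipeline, not the pipelines.CodePipeline wrapper.
rule.addTarget(new targets.CodePipeline(pipeline.pipeline));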

How can I import a Role to use to create CodeBuild project in AWS CDK?

I would like to import a Role in CDK to use it to configure a new CodeBuild project.
const role1 = Role.import(this, "service-role", {
  roleArn: "arn:aws:iam::1234567890:role/service-role/codebuild-manualproject-service-role",
});
NOTE: I changed the account reference in the ARN for security.
Here is the code for creating the CodeBuild project:
// Create build
// note: now that there is one bucket per build project it is not necessary
// to have a prefix/subdirectory in the bucket for storing cache items
const project = new codebuild.Project(this, CODEBUILD_PROJECT_NAME, {
  // render url for build badge
  badge: true,
  cacheBucket: cacheBucket,
  cacheDir: "Cache",
  description: "An AWS codebuild project for repo: " + REPO + ", created: " + Date(),
  // note will use standard buildspec.yml
  // set the environment
  environment: {
    buildImage: CODEBUILD_IMAGE,
    computeType: CODEBUILD_COMPUTETYPE,
  },
  projectName: CODEBUILD_PROJECT_NAME,
  role: role1,
  // add in source control
  source: source,
  // set timeout - mins
  timeout: 30,
  vpc: vpc,
  securityGroups: [securityGroup]
});
I run:
npm run build
No errors. I then run:
cdk synth
and get the error:
Error: Validation failed with the following errors:
[Quicktest2Stack/frontend-v3-codebuild/PolicyDocument] Policy must be attached to at least one principal: user, group or role
at Synthesizer.synthesize (C:\scratch\CDKTest\quicktest2\node_modules\@aws-cdk\cdk\lib\synthesis.js:22:23)
at App.run (C:\scratch\CDKTest\quicktest2\node_modules\@aws-cdk\cdk\lib\app.js:44:31)
at process.<anonymous> (C:\scratch\CDKTest\quicktest2\node_modules\@aws-cdk\cdk\lib\app.js:24:51)
at Object.onceWrapper (events.js:284:20)
at process.emit (events.js:196:13)
Subprocess exited with error 1
So if I "import" the role in CDK and attempt to bind it to the new CodeBuild project then it fails (cdk synth)
But it I allow CDK to create it's own role when building the new CodeBuild project then cdk synth works and the stack can be deployed BUT the codebuild project fails.
If I take the CodeBuild project and just manually change the role to the inital one I tried to import - then all works well and can build will run.
I need to be able to create the CodeBuild project and bind to the role in via CDK.
CDK VERSION: 0.28.0
Any help gratefully received.
I found this problem disappeared once I upgraded CDK to 0.31.0.
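For anyone hitting this on a current CDK version: importing an existing role is now done with iam.Role.fromRoleArn rather than Role.import (the 0.x API used in the question). A minimal sketch; the ARN is the one from the question, and mutable: false is an assumption chosen to keep CDK from trying to attach policies to a role it does not manage:
import * as iam from 'aws-cdk-lib/aws-iam';

// Import the existing service role by ARN; mutable: false tells CDK not to
// add new policy statements to the imported role during synthesis.
const role1 = iam.Role.fromRoleArn(
  this,
  'service-role',
  'arn:aws:iam::1234567890:role/service-role/codebuild-manualproject-service-role',
  { mutable: false },
);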

Get BuildInfo From Artifactory Using Jenkins

Using the Jenkins DSL I can create and publish build info using Artifactory.newBuildInfo, but I am looking for the complementary method to read the BuildInfo JSON data that is generated on Artifactory. I have trawled through many resources. Any suggestions would be appreciated.
From the Artifactory REST API it sure looks like you can retrieve buildInfo. I'd expect this to be exposed from the Jenkins plugin as well.
Build Info
Description: Build Info
Since: 2.2.0
Security: Requires a privileged user with deploy permissions (can be anonymous)
Usage: GET /api/build/{buildName}/{buildNumber}
Produces: application/vnd.org.jfrog.build.BuildInfo+json
...
JFrog's project examples on GitHub are a fabulous resource, as is their Jenkins plugin.
From a quick search it looks like you'd define a download spec and then use the server.download method (see Working with Pipeline Jobs in Jenkins):
def buildInfo1 = server.download downloadSpec
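For example, a sketch of that download-spec approach in a scripted pipeline (the server id, repository path and target directory here are made up for illustration):
def server = Artifactory.server 'my-artifactory'

def downloadSpec = """{
  "files": [
    {
      "pattern": "libs-release-local/com/example/my-app/1.0/*.jar",
      "target": "downloads/"
    }
  ]
}"""

// Note: this returns a buildInfo describing the download it just performed,
// not the original buildInfo stored on the Artifactory server.
def buildInfo1 = server.download(downloadSpec)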
The previous answer creates a new buildInfo; it does not download the original buildInfo. I've been trying for days to figure out how to do what the original poster wants to do. The best I've succeeded at is downloading the buildInfo into a hashtable, working with that, then uploading the changes via REST calls.
def curlstr = "curl -H 'X-JFrog-Art-Api:${password}' ${arturl}api/build/${buildName}/${buildNumber}"
def buildInfoString = sh(
script: curlstr,
returnStdout: true
).trim()
buildInfo = (new JsonSlurperClassic().parseText(buildInfoString))
sh("echo '${JsonOutput.toJson(buildInfo)}'|curl -XPUT -H 'X-JFrog-Art-Api:${password}' -H 'Content-Type: application/json' ${arturl}api/build --upload-file - ")
I was able to modify the buildInfo in the Artifactory repository using this technique. It is not as clean as I would like, and I've been unable to get the JFrog CLI to modify existing buildInfo files either.
For whatever it's worth, the intent of what I'm trying to do is to promote a Docker artifact and change its name while doing it. I've found no way to express this to Artifactory that doesn't involve pulling the artifact into Docker and then pushing it again. I'd love it if someone from JFrog could clue me in on how to do it.
UPDATE: Attention! I got the question wrong. This is how you get the local BuildInfo object in a declarative pipeline script.
I managed this by using an internal API from the jenkins-artifactory-plugin.
// found in org.jfrog.hudson.pipeline.declarative.utils.DeclarativePipelineUtils
/**
 * Get build info as defined in previous rtBuildInfo{...} scope.
 *
 * @param rootWs - Step's root workspace.
 * @param build - Step's build.
 * @param customBuildName - Step's custom build name if exist.
 * @param customBuildNumber - Step's custom build number if exist.
 * @return build info object as defined in previous rtBuildInfo{...} scope or a new build info.
 */
public static BuildInfo getBuildInfo(FilePath rootWs, Run<?, ?> build, String customBuildName, String customBuildNumber, String project) throws IOException, InterruptedException {
    ...
}
With this code you can fetch the BuildInfo inside a declarative pipeline script step.
def buildInfo = org.jfrog.hudson.pipeline.declarative.utils.DeclarativePipelineUtils.getBuildInfo(new hudson.FilePath(new java.io.File(env.WORKSPACE)), currentBuild.rawBuild, null, null, null);
UPDATE: Beware of custom build names and numbers. If you have defined a custom build name and/or build number, you have to provide it in the getBuildInfo call.

How to set environment variables when testing DSL scripts against a dummy Jenkins?

I am trying to automate testing of Jenkins Groovy DSL scripts, as in this example:
https://github.com/sheehan/job-dsl-gradle-example
The idea, I think, is very straightforward; what I'm having issues with is setting environment variables for the dummy Jenkins. I followed the instructions here:
https://wiki.jenkins-ci.org/display/JENKINS/Unit+Test
Specifically "How to set env variables" section and added the following to my test executor:
import hudson.slaves.EnvironmentVariablesNodeProperty
import hudson.EnvVars

/**
 * Tests that all dsl scripts in the jobs directory will compile.
 */
class JobScriptsSpec extends Specification {

    @Shared
    @ClassRule
    JenkinsRule jenkinsRule = new JenkinsRule()

    EnvironmentVariablesNodeProperty prop = new EnvironmentVariablesNodeProperty();
    EnvVars envVars = prop.getEnvVars();

    @Unroll
    void 'test script #file.name'(File file) {
        given:
        envVars.put("ENVS", "dev19");
        jenkinsRule.jenkins.getGlobalNodeProperties().add(prop);
        JobManagement jm = new JenkinsJobManagement(System.out, [:], new File('.'))

        when:
        new DslScriptLoader(jm).runScript(file.text)

        then:
        noExceptionThrown()

        where:
        file << jobFiles
    }
}
However, when I run the actual tests for one of the scripts, I still see the following:
Failed tests
test script Build.groovy
Expected no exception to be thrown, but got 'javaposse.jobdsl.dsl.DslScriptException'
at spock.lang.Specification.noExceptionThrown(Specification.java:118)
at com.dslexample.JobScriptsSpec.test script #file.name(JobScriptsSpec.groovy:40)
Caused by: javaposse.jobdsl.dsl.DslScriptException: (script, line 3) No such property: ENVS for class: script
The script Build.groovy uses the variable "${ENVS}" (as if it were provided by a parameter in the Jenkins seed job), which works as expected when actually running in Jenkins... So is there any way to set these "parameters" or env variables in the test Jenkins context?
Example of how I use the ENVS variable in Build.groovy:
def envs = '-'
"${ENVS}".eachLine {
    def env = it
    envs += env + '-'
}
// strip the trailing '-'
envs = envs.substring(0, envs.length() - 1)
job('Build' + envs) {
    ...
}
The second argument of the JenkinsJobManagement constructor is a map of environment variables which will be available in the DSL scripts.
Map<String, String> envVars = [
    FOO: 'BAR'
]
JobManagement jm = new JenkinsJobManagement(System.out, envVars, new File('.'))
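Applied to the spec from the question, the test method could look like this (a sketch: the "dev19" value comes from the question, and the rest simply mirrors the original test method with the second constructor argument filled in):
@Unroll
void 'test script #file.name'(File file) {
    given:
    // ENVS becomes available as a property inside the DSL scripts under test
    JobManagement jm = new JenkinsJobManagement(System.out, [ENVS: 'dev19'], new File('.'))

    when:
    new DslScriptLoader(jm).runScript(file.text)

    then:
    noExceptionThrown()

    where:
    file << jobFiles
}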

Add command to Grails build process

I am using the Grails cdn-asset-pipeline plugin. I've gone through the installation and configuration steps on GitHub and reached the usage section, which says:
Add this command to your build process (usually before war generation and deployment).
// If all the settings are defined in your Config.groovy
grails asset-cdn-push
// Or
grails asset-cdn-push --provider=S3 --directory=my-bucket --gzip=true --storage-path=some-prefix --expires=365 --region=eu-west-1 --access-key=$MY_S3_ACCESS_KEY --secret-key=$MY_S3_SECRET_KEY
Where in my project do I put this command?
Is it something that I can do within the context of my project, or do I need to keep it separate in another build process and run it in an environment like Jenkins?
In _Events.groovy, I tried to invoke the script in the eventCreateWarStart, but I am having no luck there. (Code taken from this question)
eventCreateWarStart = { warName, stagingDir ->
    def pluginManager = PluginManagerHolder.pluginManager
    def plugin = pluginManager.getGrailsPlugin("cdn-asset-pipline")
    def pluginDir = plugin.descriptor.file.parentFile

    Map<String, String> env = System.getenv()
    final processBuilder = new ProcessBuilder()
    processBuilder.directory(new File("${cdnAssetPipelinePluginDir}/scripts"))
    processBuilder.command([env['GRAILS_HOME'] + "/bin/grails", "cdn-asset-push"])
    println processBuilder.directory()

    Process proc = processBuilder.start()
    proc.consumeProcessOutput(out, err)
    proc.waitFor()
}
This link explains the run-script functionality that was merged into Grails 1.3.6, but I ran into the same problem of not knowing where to run it automatically.
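If you do keep it outside the Grails build, here is a sketch of running it as an explicit CI step, as the question suggests with Jenkins (a declarative pipeline; the stage names, credentials id and bucket name are illustrative, not from the original question):
pipeline {
    agent any
    stages {
        stage('Build WAR') {
            steps {
                sh 'grails war'
            }
        }
        stage('Push assets to CDN') {
            steps {
                // Inject the S3 keys from Jenkins credentials rather than hard-coding them
                withCredentials([usernamePassword(credentialsId: 'cdn-s3',
                        usernameVariable: 'MY_S3_ACCESS_KEY',
                        passwordVariable: 'MY_S3_SECRET_KEY')]) {
                    sh 'grails asset-cdn-push --provider=S3 --directory=my-bucket --gzip=true --access-key=$MY_S3_ACCESS_KEY --secret-key=$MY_S3_SECRET_KEY'
                }
            }
        }
    }
}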
