I have set up two Gradle tasks in my build.gradle file, one for each environment (e.g. DEV and PROD).
plugins {
    id "com.google.cloud.tools.jib" version "2.4.0"
}

version = '0.1'
group = 'my.company'

jib {
    from {
        image = "openjdk:14-slim"
    }
    to {
        image = "my-registry.some-provider.com/my-app"
        tags = [version, 'latest']
    }
    container {
        mainClass = "${group}.Application"
        jvmFlags = ["-Xms${findProperty('MEMORY') ?: '256'}m", '-Xdebug']
        ports = ['80']
        volumes = ['/data']
        environment = [
            'VERSION': version,
            'DATA_DIR': '/data',
            'APPLICATION_PORT': '80',
            'DEVELOPMENT_MODE': 'false'
        ]
    }
}
task prodtask(type: sometask1) { }
task devtask(type: sometask2) { }
I run these tasks on the command line with the Gradle wrapper:
./gradlew -i devtask (in DEV) and
./gradlew -i prodtask (in PROD)
How do I run these two tasks separately in the DEV and PROD environments, with the Docker image built by the Jib plugin?
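One way to do this (a minimal sketch, not the only option) is to skip the two custom task types entirely, parameterize the single jib block with a project property, and run the jib task that the plugin itself provides. The env property name and the DEV registry URL below are my own illustrations:

def env = findProperty('env') ?: 'dev' // hypothetical property: pass -Penv=prod in PROD

jib {
    to {
        // Pick the registry per environment
        image = env == 'prod'
                ? 'my-registry.some-provider.com/my-app'
                : 'my-dev-registry.some-provider.com/my-app' // illustrative DEV registry
        tags = [version, 'latest']
    }
    container {
        environment = ['DEVELOPMENT_MODE': env == 'prod' ? 'false' : 'true']
    }
}

// ./gradlew jib -Penv=dev    (in DEV)
// ./gradlew jib -Penv=prod   (in PROD)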
I am learning how to deploy AWS Lambda functions from Jenkins.
I have the following folder structure:
src -> favorites -> findAll -> index.js
src -> favorites -> insert -> index.js
src -> movies -> findAll -> index.js
src -> movies -> findOne -> index.js
Essentially four functions.
Here's part of the Jenkinsfile:
def functions = ['MoviesStoreListMovies', 'MoviesStoreSearchMovie', 'MoviesStoreViewFavorites', 'MoviesStoreAddToFavorites']
stage('Build') {
    sh """
        docker build -t ${imageName} .
        containerName=\$(docker run -d ${imageName})
        docker cp \$containerName:/app/node_modules node_modules
        docker rm -f \$containerName
        zip -r ${commitID()}.zip node_modules src
    """
}
stage('Push') {
    functions.each { function ->
        sh "aws s3 cp ${commitID()}.zip s3://${bucket}/${function}/"
    }
}
At the end I expect the same .zip to be uploaded under four prefixes in the S3 bucket (i.e. all four folders/functions present in each copy).
And now my issue: the deploy stage.
stage('Deploy') {
    functions.each { function ->
        sh "aws lambda update-function-code --function-name ${function} --s3-bucket ${bucket} --s3-key ${function}/${commitID()}.zip --region ${region}"
    }
}
Since every zip has the same content, how can it be that exactly four functions are deployed? Each of the four .zip files contains the same four folders/functions, so I would have expected 4x4=16 functions eventually.
What am I missing?
Maybe my mistake: I forgot to mention that the Lambda functions are created with Terraform first. And indeed Terraform sets each function's handler to the right src path:
module "MoviesStoreListMovies" {
source = "./modules/function"
name = "MoviesStoreListMovies"
handler = "src/movies/findAll/index.handler"
runtime = "nodejs12.x"
environment = {
TABLE_NAME = aws_dynamodb_table.movies.id
}
}
module "MoviesStoreSearchMovie" {
source = "./modules/function"
name = "MoviesStoreSearchMovie"
handler = "src/movies/findOne/index.handler"
runtime = "nodejs12.x"
environment = {
TABLE_NAME = aws_dynamodb_table.movies.id
}
}
module "MoviesStoreViewFavorites" {
source = "./modules/function"
name = "MoviesStoreViewFavorites"
handler = "src/favorites/findAll/index.handler"
runtime = "nodejs12.x"
environment = {
TABLE_NAME = aws_dynamodb_table.favorites.id
}
}
module "MoviesStoreAddToFavorites" {
source = "./modules/function"
name = "MoviesStoreAddToFavorites"
handler = "src/favorites/insert/index.handler"
runtime = "nodejs12.x"
environment = {
TABLE_NAME = aws_dynamodb_table.favorites.id
}
}
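That resolves the 4-vs-16 puzzle: all four functions do receive the identical bundle, but Lambda invokes only the file and export named in each function's handler setting, so the other three folders are inert payload in each deployment. As illustration only, a hypothetical sketch of one entry file:

// src/movies/findAll/index.js (hypothetical sketch)
// The function configured with handler "src/movies/findAll/index.handler"
// runs exactly this export; the other folders in the zip are ignored.
exports.handler = async (event) => {
    // ... query the movies table here ...
    return { statusCode: 200, body: JSON.stringify([]) };
};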
I have a Jenkins pipeline script that runs various test suites in parallel over multiple nodes. I'm not a Jenkins expert - most of this is copied and pasted from other jobs we have.
I am occasionally getting failures where the archiveArtifacts command has somehow gotten the wrong tar_file variable.
Normally, the suite name is built into the tar file of logs, and the tar file is then archived to Jenkins; but in some runs the tar file is created under one suite's name and the archive step uses a different one (I've seen runs where two or three of the parallel steps fail with this sort of error, and some where only one fails).
So somehow, between sh("tar -czf ${tar_file} ${host_logs}/*") and archiveArtifacts artifacts: tar_file, the value of tar_file has changed to that of a different suite.
Any thoughts on how I can change this so that the tar_file stays constant in each step?
try {
    stage('cuke_regression') {
        def stepsForCuke = [:]
        stepsForCuke['cuke_api'] = sectionCukeRegressionTests("api")
        stepsForCuke['cuke_admin'] = sectionCukeRegressionTests("admin")
        stepsForCuke['cuke_notification'] = sectionCukeRegressionTests("notification")
        stepsForCuke['cuke_public'] = sectionCukeRegressionTests("public")
        stepsForCuke['cuke_project'] = sectionCukeRegressionTests("project")
        stepsForCuke['cuke_group'] = sectionCukeRegressionTests("group")
        stepsForCuke['cuke_review'] = sectionCukeRegressionTests("review")
        stepsForCuke['cuke_workflow'] = sectionCukeRegressionTests("workflow")
        stepsForCuke['cuke_review_comment'] = sectionCukeRegressionTests("review_comment")
        parallel stepsForCuke
    }
}
def sectionCukeRegressionTests(suite) {
    section = 'cukes'
    return {
        section: {
            node('docker-node') {
                tar_file = "cukes-" + suite + "-logs.tgz"
                try {
                    sh("docker-compose exec ... run the tests")
                    sh("tar -czf ${tar_file} ${host_logs}/*")
                } finally {
                    sh("docker-compose down")
                    archiveArtifacts artifacts: tar_file
                }
            }
        }
    }
}
Dibakar's suggestion of making the tar_file variable a 'def' variable has fixed the problem for me.
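For reference, a minimal sketch of that fix: without def, tar_file is a single script-level binding that all the parallel closures share and overwrite; with def it becomes a local variable captured separately by each closure.

def sectionCukeRegressionTests(suite) {
    return {
        node('docker-node') {
            // 'def' makes tar_file local to this closure instead of a
            // script-level binding shared across the parallel branches.
            def tar_file = "cukes-" + suite + "-logs.tgz"
            try {
                sh("tar -czf ${tar_file} ${host_logs}/*")
            } finally {
                sh("docker-compose down")
                archiveArtifacts artifacts: tar_file
            }
        }
    }
}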
I am trying to choose a different Docker agent from a private container registry based on a parameter in a Jenkins pipeline. For my example, let's say I have 'credsProd' and 'credsTest' saved in the credentials store. My attempt is as follows:
pipeline {
    parameters {
        choice(
            name: 'registrySelection',
            choices: ['TEST', 'PROD'],
            description: 'Is this a deployment to the TEST or PROD environment?'
        )
    }
    environment {
        URL_VAR = "${env.registrySelection == "PROD" ? "urlProd.azure.io" : "urlTest.azure.io"}"
        CREDS_VAR = "${env.registrySelection == "PROD" ? "credsProd" : "credsTest"}"
    }
    agent {
        docker {
            image "${env.URL_VAR}/image:tag"
            registryUrl "https://${env.URL_VAR}"
            registryCredentialsId "${env.CREDS_VAR}"
        }
    }
    stages {
        stage('test') {
            steps {
                echo "${env.URL_VAR}"
                echo "${env.CREDS_VAR}"
            }
        }
    }
}
I get the error:
Error response from daemon: Get https://null/v2/: dial tcp: lookup null on
If I hard code the registryUrl I get a similar issue with registryCredentialsId:
agent {
    docker {
        image "${env.URL_VAR}/image:tag"
        registryUrl "https://urlTest.azure.io"
        registryCredentialsId "${env.CREDS_VAR}"
    }
}
ERROR: Could not find credentials matching null
It is successful if I hardcode both registryUrl and registryCredentialsId.
agent {
    docker {
        image "${env.URL_VAR}/image:tag"
        registryUrl "https://urlTest.azure.io"
        registryCredentialsId "credsTest"
    }
}
It appears that the docker login step of the agent{docker{}} block cannot access/resolve environment variables.
Is there a way around this that does not involve code duplication? I manage changes with a multibranch pipeline, so ideally I do not want separate prod and test Groovy files, or different sets of sequential steps in the same file.
Try running a scripted pipeline before declarative:
URL_VAR = null
CREDS_VAR = null

node('master') {
    stage('Choose') {
        URL_VAR = params.registrySelection == "PROD" ? "urlProd.azure.io" : "urlTest.azure.io"
        CREDS_VAR = params.registrySelection == "PROD" ? "credsProd" : "credsTest"
    }
}

pipeline {
    agent {
        docker {
            image "${URL_VAR}/image:tag"
            registryUrl "https://${URL_VAR}"
            registryCredentialsId "${CREDS_VAR}"
        }
    }
...
Alternatively, you can define two stages (with hard-coded URL and credentials) but run only one of them, using when in both, as sketched below.
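A sketch of that alternative, assuming per-stage agents are acceptable (the stage names and steps are illustrative):

pipeline {
    agent none
    parameters {
        choice(name: 'registrySelection', choices: ['TEST', 'PROD'])
    }
    stages {
        stage('Run on TEST registry') {
            when { expression { params.registrySelection == 'TEST' } }
            agent {
                docker {
                    image 'urlTest.azure.io/image:tag'
                    registryUrl 'https://urlTest.azure.io'
                    registryCredentialsId 'credsTest'
                }
            }
            steps { echo 'Using the TEST registry' }
        }
        stage('Run on PROD registry') {
            when { expression { params.registrySelection == 'PROD' } }
            agent {
                docker {
                    image 'urlProd.azure.io/image:tag'
                    registryUrl 'https://urlProd.azure.io'
                    registryCredentialsId 'credsProd'
                }
            }
            steps { echo 'Using the PROD registry' }
        }
    }
}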
I am new to Gradle scripts. I have a build.gradle file that I want to split into two files. After splitting the larger build.gradle file, I end up with the following two files.
build.gradle
buildscript {
    ext {
        springBootVersion = '1.5.12.RELEASE'
        gradleDockerVersion = '3.2.7'
    }
    repositories {
        jcenter()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
        classpath("com.bmuschko:gradle-docker-plugin:${gradleDockerVersion}")
    }
}

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'idea'
apply plugin: 'org.springframework.boot'
apply plugin: 'io.spring.dependency-management'

jar {
    baseName = 'gs-spring-boot'
    version = '0.1.0'
}

repositories {
    mavenCentral()
}

sourceCompatibility = 1.8
targetCompatibility = 1.8
compileJava.options.encoding = 'UTF-8'

dependencies {
    compile("org.springframework.boot:spring-boot-starter-web")
    testCompile("junit:junit")
}

project.ext.imageName = 'myImage'
project.ext.tagName = 'myTag'
project.ext.jarName = (jar.baseName + '-' + jar.version).toLowerCase()

apply from: 'dockerapp.gradle'
dockerapp.gradle
def gradleDockerVersion = '3.7.2'

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath("com.bmuschko:gradle-docker-plugin:${gradleDockerVersion}")
    }
}

apply plugin: 'com.bmuschko.docker-remote-api'

import com.bmuschko.gradle.docker.tasks.image.DockerBuildImage
import com.bmuschko.gradle.docker.tasks.image.DockerRemoveImage
import com.bmuschko.gradle.docker.tasks.image.Dockerfile

def imageName = project.ext.imageName
def tagName = project.ext.tagName
def jarName = project.ext.jarName

task createAppDockerfile(type: Dockerfile) {
    // Don't create the Dockerfile if the file already exists
    onlyIf { !project.file('Dockerfile').exists() }
    group 'Docker'
    description 'Generate docker file for the application'
    dependsOn bootRepackage
    destFile = project.file('Dockerfile')
    String dockerProjFolder = project.projectDir.name
    from 'openjdk:8-jre-slim'
    runCommand("mkdir -p /app/springboot/${dockerProjFolder} && mkdir -p /app/springboot/${dockerProjFolder}/conf")
    addFile("./build/libs/${jarName}.jar", "/app/springboot/${dockerProjFolder}/")
    environmentVariable('CATALINA_BASE', "/app/springboot/${dockerProjFolder}")
    environmentVariable('CATALINA_HOME', "/app/springboot/${dockerProjFolder}")
    workingDir("/app/springboot/${dockerProjFolder}")
    if (System.properties.containsKey('debug')) {
        entryPoint('java', '-Xdebug', '-Xrunjdwp:server=y,transport=dt_socket,address=5005,suspend=n', '-jar', "/app/springboot/${dockerProjFolder}/${jarName}.jar")
    } else {
        entryPoint('java', '-jar', "/app/springboot/${dockerProjFolder}/${jarName}.jar")
    }
}

task removeAppImage(type: DockerRemoveImage) {
    group 'Docker'
    description 'Remove the docker image using force'
    force = true
    targetImageId { imageName }
    onError { exception ->
        if (exception.message.contains('No such image')) {
            println 'Docker image not found for the current project.'
        }
    }
}

task createAppImage(type: DockerBuildImage) {
    group 'Docker'
    description 'Executes bootRepackage, generates a docker file and builds image from it'
    dependsOn(createAppDockerfile, removeAppImage)
    dockerFile = createAppDockerfile.destFile
    inputDir = dockerFile.parentFile
    if (tagName)
        tag = "${tagName}"
    else if (imageName)
        tag = "${imageName}"
    else
        tag = "${jarName}"
}
If I try to run the command ./gradlew createAppImage, I get an error.
The other two tasks within the dockerapp.gradle file seem to work without issues. If I place all the code within the build.gradle file, it works properly without any errors. What is the best way to split the files and execute createAppImage without running into errors?
I was able to resolve this with help from CDancy (maintainer of the plugin) as follows:
build.gradle
buildscript {
    ext {
        springBootVersion = '1.5.12.RELEASE'
        gradleDockerVersion = '3.2.7'
    }
    repositories {
        jcenter()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
        classpath("com.bmuschko:gradle-docker-plugin:${gradleDockerVersion}")
    }
}

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'idea'
apply plugin: 'org.springframework.boot'
apply plugin: 'io.spring.dependency-management'

jar {
    baseName = 'gs-spring-boot'
    version = '0.1.0'
}

repositories {
    mavenCentral()
}

sourceCompatibility = 1.8
targetCompatibility = 1.8
compileJava.options.encoding = 'UTF-8'

dependencies {
    compile("org.springframework.boot:spring-boot-starter-web")
    testCompile("junit:junit")
}

project.ext.imageName = 'myimage'
project.ext.tagName = 'mytag'
project.ext.jarName = (jar.baseName + '-' + jar.version).toLowerCase()

apply from: 'docker.gradle'
docker.gradle
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.bmuschko:gradle-docker-plugin:3.2.7'
    }
}

repositories {
    jcenter()
}

// Use the fully qualified class name
apply plugin: com.bmuschko.gradle.docker.DockerRemoteApiPlugin

// Import the task classes
import com.bmuschko.gradle.docker.tasks.image.*

def imageName = project.ext.imageName
def tagName = project.ext.tagName
def jarName = project.ext.jarName

task createAppDockerfile(type: Dockerfile) {
    // Don't create the Dockerfile if the file already exists
    onlyIf { !project.file('Dockerfile').exists() }
    group 'Docker'
    description 'Generate docker file for the application'
    dependsOn bootRepackage
    destFile = project.file('Dockerfile')
    String dockerProjFolder = project.projectDir.name
    from 'openjdk:8-jre-slim'
    runCommand("mkdir -p /app/springboot/${dockerProjFolder} && mkdir -p /app/springboot/${dockerProjFolder}/conf")
    addFile("./build/libs/${jarName}.jar", "/app/springboot/${dockerProjFolder}/")
    environmentVariable('CATALINA_BASE', "/app/springboot/${dockerProjFolder}")
    environmentVariable('CATALINA_HOME', "/app/springboot/${dockerProjFolder}")
    workingDir("/app/springboot/${dockerProjFolder}")
    if (System.properties.containsKey('debug')) {
        entryPoint('java', '-Xdebug', '-Xrunjdwp:server=y,transport=dt_socket,address=5005,suspend=n', '-jar', "/app/springboot/${dockerProjFolder}/${jarName}.jar")
    } else {
        entryPoint('java', '-jar', "/app/springboot/${dockerProjFolder}/${jarName}.jar")
    }
}

task removeAppImage(type: DockerRemoveImage) {
    group 'Docker'
    description 'Remove the docker image using force'
    force = true
    targetImageId { imageName }
    onError { exception ->
        if (exception.message.contains('No such image')) {
            println 'Docker image not found for the current project.'
        } else {
            print exception
        }
    }
}

task createAppImage(type: DockerBuildImage) {
    group 'Docker'
    description 'Executes bootRepackage, generates a docker file and builds image from it'
    dependsOn(createAppDockerfile, removeAppImage)
    dockerFile = createAppDockerfile.destFile
    inputDir = dockerFile.parentFile
    if (tagName)
        tag = "${tagName}"
    else if (imageName)
        tag = "${imageName}"
    else
        tag = "${jarName}"
}
The change was basically in my docker.gradle file: I had to add a repositories section pointing to jcenter() and use the fully qualified plugin class name.
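As I understand it, the reason is that a classpath declared in a script plugin's own buildscript block exposes the classes to that script but does not register the plugin's string id, so the id form fails while the class form works:

// In an applied script such as docker.gradle:
// apply plugin: 'com.bmuschko.docker-remote-api'              // fails: the id is not resolvable here
apply plugin: com.bmuschko.gradle.docker.DockerRemoteApiPlugin // works: applied by class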
I wonder why you would ever want to use Docker plugins in your Gradle build. After spending some time with the most maintained one, Muschko's Gradle Docker Plugin, I don't understand why one should build plugin boilerplate on top of Docker boilerplate, in two different syntaxes.
Containerizing from Gradle may be comfortable for programmers, but DevOps teams don't work with Docker like this, and you should go their way if you need their support. I found a solution below that is short, pure Docker-style, and functionally complete.
Keep a normal Dockerfile and a docker.sh shell script in the project structure:
In your build.gradle file:
buildscript {
    repositories {
        gradlePluginPortal()
    }
}

plugins {
    id 'java'
    id 'application'
}

// Entry point:
jar {
    manifest {
        attributes 'Main-Class': 'com.example.Test'
    }
}
mainClassName = 'com.example.Test'

// Run in a Docker container:
task runInDockerContainer {
    doLast {
        exec {
            workingDir '.' // Relative to the project's root folder.
            commandLine 'sh', './build/resources/main/docker.sh'
        }
        // exec {
        //     workingDir '.'
        //     commandLine 'sh', './build/resources/main/other.sh'
        // }
    }
}
Do anything you want in a clean shell script, following the normal Docker docs:
# Everything runs relative to the project's root folder.
# Put everything together into build/docker for the Docker build:
if [ -d "build/docker" ]; then rm -rf build/docker; fi;
mkdir build/docker;
cp build/libs/test.jar build/docker/test.jar;
cp build/resources/main/Dockerfile build/docker/Dockerfile;

# Build the image from the files in build/docker:
cd build/docker || exit;
docker build . -t test;

# Run a container based on the image:
echo $PWD
docker run test;
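Assuming the Dockerfile and docker.sh live under src/main/resources (so that processResources copies them to build/resources/main, where the task expects them), the whole flow is then a single wrapper call:

./gradlew build runInDockerContainer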
I have a DSL script that creates a job. As soon as I run the job, its config.xml changes. Because of this, the job doesn't get updated when I run the seed job again.
I suspect some plugins do this. Can you tell me the best way to find out what changes the config when the job is run?
[
    [name: "Sonar/co", repo: "repo.git", pomPath: "pom.xml", branch: "development", mvnGoal: "-am -P dev -pl project clean test"]
].each { Map config ->
    mavenJob(config.name) {
        description "Sonar job for ${config.name}"
        logRotator {
            numToKeep(1)
        }
        label "sonar"
        scm {
            git {
                branch "*/${config.branch}"
                remote {
                    url "git@repository:${config.repo}"
                }
                browser {
                    gitLab("https://gitlab.DOMAIN.de/", "9.0")
                }
            }
        }
        mavenInstallation "maven339"
        goals config.mvnGoal
        rootPOM config.pomPath
        configure { node ->
            node / settings(class: 'jenkins.mvn.DefaultSettingsProvider') {
            }
            node / globalSettings(class: 'jenkins.mvn.DefaultGlobalSettingsProvider') {
            }
        }
    }
}