Nested deployment of AWS Lambda functions with Jenkins

I am learning how to deploy AWS Lambda functions from Jenkins.
I have the following folder structure:
src -> favorites -> findAll -> index.js
src -> favorites -> insert -> index.js
src -> movies -> findAll -> index.js
src -> movies -> findOne -> index.js
Essentially 4x functions.
Here's part of Jenkinsfile:
def functions = ['MoviesStoreListMovies', 'MoviesStoreSearchMovie', 'MoviesStoreViewFavorites', 'MoviesStoreAddToFavorites']
stage('Build'){
    sh """
        # Build the image, start a throwaway container, and copy the installed
        # node_modules out of it so they can be zipped together with the sources.
        docker build -t ${imageName} .
        containerName=\$(docker run -d ${imageName})
        docker cp \$containerName:/app/node_modules node_modules
        docker rm -f \$containerName
        zip -r ${commitID()}.zip node_modules src
    """
}
stage('Push'){
    functions.each { function ->
        sh "aws s3 cp ${commitID()}.zip s3://${bucket}/${function}/"
    }
}
At the end I expect to have four prefixes in the S3 bucket (one per function), each holding the same .zip (i.e. all four folders/functions present in each archive).
Here now is my issue: the deploy stage.
stage('Deploy'){
    functions.each { function ->
        sh "aws lambda update-function-code --function-name ${function} --s3-bucket ${bucket} --s3-key ${function}/${commitID()}.zip --region ${region}"
    }
}
Since each zip has the same content, how can it be that the four functions are deployed as exactly four functions? Again, each of the four .zip files contains the same four folders/functions, so I would expect 4x4 = 16 functions eventually.
What am I missing?

Maybe this was my mistake: I forgot to mention that the Lambda functions are created with Terraform first. And indeed Terraform creates the handlers pointing to the right src paths:
module "MoviesStoreListMovies" {
source = "./modules/function"
name = "MoviesStoreListMovies"
handler = "src/movies/findAll/index.handler"
runtime = "nodejs12.x"
environment = {
TABLE_NAME = aws_dynamodb_table.movies.id
}
}
module "MoviesStoreSearchMovie" {
source = "./modules/function"
name = "MoviesStoreSearchMovie"
handler = "src/movies/findOne/index.handler"
runtime = "nodejs12.x"
environment = {
TABLE_NAME = aws_dynamodb_table.movies.id
}
}
module "MoviesStoreViewFavorites" {
source = "./modules/function"
name = "MoviesStoreViewFavorites"
handler = "src/favorites/findAll/index.handler"
runtime = "nodejs12.x"
environment = {
TABLE_NAME = aws_dynamodb_table.favorites.id
}
}
module "MoviesStoreAddToFavorites" {
source = "./modules/function"
name = "MoviesStoreAddToFavorites"
handler = "src/favorites/insert/index.handler"
runtime = "nodejs12.x"
environment = {
TABLE_NAME = aws_dynamodb_table.favorites.id
}
}
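If you want to convince yourself of this from the pipeline, a minimal sketch (assuming the same functions list, region, and AWS credentials as above; the stage name is illustrative) could print each function's configured handler after deployment:
stage('Verify Handlers'){
    functions.each { function ->
        // The handler, not the zip layout, decides which entry point runs.
        sh "aws lambda get-function-configuration --function-name ${function} --query Handler --output text --region ${region}"
    }
}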

Related

Jenkins groovy - copy a file based on a string in params

I have a Jenkins job that needs to copy a file to a specific server based on the user's choice. Until today everything worked, since I needed to copy the same file to whichever server the user chose.
Now I need to copy a specific file per server: if the user chooses to deploy Server_lab1-1.1.1.1, then lab1.file.conf should be copied; if the user chooses to deploy Server_lab2-2.2.2.2, then lab2.file.conf should be copied.
I'm guessing that I need to add a check to the function: if the Servers parameter includes lab1, copy lab1.file.conf, and if it includes lab2, copy lab2.file.conf.
parameters {
    extendedChoice(name: 'Servers', description: 'Select servers for deployment', multiSelectDelimiter: ',',
        type: 'PT_CHECKBOX', value: 'Server_lab1-1.1.1.1, Server_lab2-2.2.2.2', visibleItemCount: 5)
}
stage('Copy nifi.flow.properties file') {
    steps { copy_file() }
}
def copy_file() {
    params.Servers.split(',').each { item ->
        def server = item.split('-').last()
        sh "scp **lab1.file.conf or lab2.file.conf** ${ssh_user_name}@${server}:${spath}"
    }
}
Are you looking for something like the below?
def copy_file() {
    params.Servers.split(',').each { item ->
        def server = item.split('-').last()
        def fileName = item.contains('lab1') ? 'lab1.file' : 'lab2.file'
        sh "scp ${fileName} ${ssh_user_name}@${server}:${spath}"
    }
}
Update: the same logic as a classic if-else:
def copy_file() {
    params.Servers.split(',').each { item ->
        def server = item.split('-').last()
        def fileName = "default"
        if (item.contains('lab1')) {
            fileName = 'lab1.file'
        } else if (item.contains('lab2')) {
            fileName = 'lab2.file'
        } else if (item.contains('lab3')) {
            fileName = 'lab3.file'
        }
        sh "scp ${fileName} ${ssh_user_name}@${server}:${spath}"
    }
}
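If the number of labs keeps growing, a lookup map keeps the mapping in one place. Here is a sketch under the same naming convention (the labFiles map and the 'default' fallback are illustrative assumptions, not part of the original job):
def copy_file() {
    // Hypothetical map from lab keyword to its config file; extend as labs are added.
    def labFiles = ['lab1': 'lab1.file', 'lab2': 'lab2.file', 'lab3': 'lab3.file']
    params.Servers.split(',').each { item ->
        def server = item.split('-').last()
        def labKey = labFiles.keySet().find { item.contains(it) }
        def fileName = labKey ? labFiles[labKey] : 'default'
        sh "scp ${fileName} ${ssh_user_name}@${server}:${spath}"
    }
}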

Jenkins pipeline stage - loop over subdirectories of a target directory

I want to iterate over the contents of a folder that will contain a bunch of subdirectories, so that I can run shell commands on each one.
I'm just trying to prove I can access the contents of the directory, and have this so far:
stage('Publish Libs') {
    when {
        branch productionBranch
    }
    steps {
        echo "Publish Libs"
        dir('dist/libs') {
            def files = findFiles()
            files.each { f ->
                if (f.directory) {
                    echo "This is directory: ${f.name}"
                }
            }
        }
    }
}
But I am getting this error:
org.jenkinsci.plugins.workflow.cps.CpsCompilationErrorsException: startup failed:
/var/lib/jenkins/jobs/al-magma/branches/master/builds/5/libs/o3-app-pipeline/vars/magmaPipeline.groovy: 178: Expected a step @ line 178, column 25.
def files = findFiles()
What's the correct syntax here please?
Your code works great, thanks! I just needed to enclose it in a script block, since declarative steps sections only accept pipeline steps directly, and plain Groovy such as def must live inside script:
stage('Iterate directories') {
    steps {
        script {
            dir('mydir') {
                def files = findFiles()
                files.each { f ->
                    if (f.directory) {
                        echo "This is a directory: ${f.name}"
                    }
                }
            }
        }
    }
}

How to list all directories from within directory in jenkins pipeline script

I want to get all directories present in a particular directory from a Jenkins pipeline script.
How can we do this?
If you want a list of all directories under a specific directory, e.g. mydir, you can do this using the Jenkins Pipeline Utility Steps plugin.
Assuming mydir is under the current directory:
dir('mydir') {
    def files = findFiles()
    files.each { f ->
        if (f.directory) {
            echo "This is directory: ${f.name}"
        }
    }
}
Just make sure you do NOT provide the glob option: providing it makes findFiles return file names only.
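For example (a minimal sketch, following the note above): with a glob set, only regular files come back, so a directory check like the one above would never match:
// Returns matching files only; f.directory is false for every result.
def files = findFiles(glob: '**/*.properties')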
More info: https://jenkins.io/doc/pipeline/steps/pipeline-utility-steps/
I didn't find any plugin to list folders, so I used an sh/bat script in the pipeline; this way it works irrespective of the operating system.
pipeline {
    agent any // required by declarative pipeline
    stages {
        stage('Find all folders under a given folder') {
            steps {
                script {
                    def foldersList = []
                    def osName = isUnix() ? "UNIX" : "WINDOWS"
                    echo "osName: " + osName
                    echo ".... JENKINS_HOME: ${JENKINS_HOME}"
                    if (isUnix()) {
                        def output = sh returnStdout: true, script: "ls -l ${JENKINS_HOME} | grep ^d | awk '{print \$9}'"
                        foldersList = output.tokenize('\n').collect() { it }
                    } else {
                        def output = bat returnStdout: true, script: "dir \"${JENKINS_HOME}\" /b /A:D"
                        foldersList = output.tokenize('\n').collect() { it }
                        // bat echoes the command itself first; drop those lines.
                        foldersList = foldersList.drop(2)
                    }
                    echo ".... " + foldersList
                }
            }
        }
    }
}
I haven't tried this, but I would look at the findFiles step provided by the Jenkins Pipeline Utility Steps plugin and set glob to an Ant-style directory pattern, something like '**/*/'.
If you just want to log them, use
sh("ls -A1 ${myDir}")
for Linux/Unix. (Note: that's a capital letter A and the number one.)
Or, use
bat("dir /B ${myDir}")
for Windows.
If you want the list of files in a variable, you'll have to use
def dirOutput = sh("ls -A1 ${myDir}", returnStdout: true)
or
def dirOutput = bat("dir /B ${myDir}", returnStdout: true)
and then parse the output.
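Parsing can be as simple as splitting on newlines; a sketch (trim() drops the trailing newline, and on Windows each line may also carry a carriage return to trim):
def dirs = dirOutput.trim().split('\n').collect { it.trim() }
echo "Found: ${dirs}"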
Recursively getting all the directories within a directory:
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                script {
                    def directories = getDirectories("$WORKSPACE")
                    echo "$directories"
                }
            }
        }
    }
}
@NonCPS
def getDirectories(path) {
    // Note: java.io.File runs on the Jenkins controller, not on an agent,
    // so this sees the controller's filesystem.
    def dir = new File(path)
    def dirs = []
    dir.traverse(type: groovy.io.FileType.DIRECTORIES, maxDepth: -1) { d ->
        dirs.add(d)
    }
    return dirs
}
A suggestion for the very end of a Jenkinsfile:
post {
    always {
        echo '\n\n-----\nThis build process has ended.\n\nWorkspace Files:\n'
        sh 'find ${WORKSPACE} -type d -print'
    }
}
Place the find wherever you think it fits best. Check more alternatives here.

gradle docker plugin (bmuschko) - splitting a build.gradle into two files gives an error

I am new to using gradle scripts. I have a build.gradle file that I want to split into two files. Once I split the larger build.gradle file, I get the following two files:
build.gradle
buildscript {
    ext {
        springBootVersion = '1.5.12.RELEASE'
        gradleDockerVersion = '3.2.7'
    }
    repositories {
        jcenter()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
        classpath("com.bmuschko:gradle-docker-plugin:${gradleDockerVersion}")
    }
}
apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'idea'
apply plugin: 'org.springframework.boot'
apply plugin: 'io.spring.dependency-management'
jar {
    baseName = 'gs-spring-boot'
    version = '0.1.0'
}
repositories {
    mavenCentral()
}
sourceCompatibility = 1.8
targetCompatibility = 1.8
compileJava.options.encoding = 'UTF-8'
dependencies {
    compile("org.springframework.boot:spring-boot-starter-web")
    testCompile("junit:junit")
}
project.ext.imageName = 'myImage'
project.ext.tagName = 'myTag'
project.ext.jarName = (jar.baseName + '-' + jar.version).toLowerCase()
apply from: 'dockerapp.gradle'
dockerapp.gradle
def gradleDockerVersion = '3.7.2'
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath("com.bmuschko:gradle-docker-plugin:${gradleDockerVersion}")
    }
}
apply plugin: 'com.bmuschko.docker-remote-api'
import com.bmuschko.gradle.docker.tasks.image.DockerBuildImage
import com.bmuschko.gradle.docker.tasks.image.DockerRemoveImage
import com.bmuschko.gradle.docker.tasks.image.Dockerfile
def imageName = project.ext.imageName
def tagName = project.ext.tagName
def jarName = project.ext.jarName
task createAppDockerfile(type: Dockerfile) {
    // Don't create dockerfile if file already exists
    onlyIf { !project.file('Dockerfile').exists() }
    group 'Docker'
    description 'Generate docker file for the application'
    dependsOn bootRepackage
    destFile = project.file('Dockerfile')
    String dockerProjFolder = project.projectDir.name
    from 'openjdk:8-jre-slim'
    runCommand("mkdir -p /app/springboot/${dockerProjFolder} && mkdir -p /app/springboot/${dockerProjFolder}/conf")
    addFile("./build/libs/${jarName}.jar", "/app/springboot/${dockerProjFolder}/")
    environmentVariable('CATALINA_BASE', "/app/springboot/${dockerProjFolder}")
    environmentVariable('CATALINA_HOME', "/app/springboot/${dockerProjFolder}")
    workingDir("/app/springboot/${dockerProjFolder}")
    if (System.properties.containsKey('debug')) {
        entryPoint('java', '-Xdebug', '-Xrunjdwp:server=y,transport=dt_socket,address=5005,suspend=n', '-jar', "/app/springboot/${dockerProjFolder}/${jarName}.jar")
    } else {
        entryPoint('java', '-jar', "/app/springboot/${dockerProjFolder}/${jarName}.jar")
    }
}
task removeAppImage(type: DockerRemoveImage) {
    group 'Docker'
    description 'Remove the docker image using force'
    force = true
    targetImageId { imageName }
    onError { exception ->
        if (exception.message.contains('No such image')) {
            println 'Docker image not found for the current project.'
        }
    }
}
task createAppImage(type: DockerBuildImage) {
    group 'Docker'
    description 'Executes bootRepackage, generates a docker file and builds image from it'
    dependsOn(createAppDockerfile, removeAppImage)
    dockerFile = createAppDockerfile.destFile
    inputDir = dockerFile.parentFile
    if (tagName)
        tag = "${tagName}"
    else if (imageName)
        tag = "${imageName}"
    else
        tag = "${jarName}"
}
If I try to run the command ./gradlew createAppImage, I get an error as follows:
The other two tasks in the dockerapp.gradle file seem to work without issues. If I place all the code in the build.gradle file, it works properly without giving any errors. What is the best way to split the files and execute createAppImage without running into errors?
I was able to resolve this with help from CDancy (maintainer of the plugin) as follows:
build.gradle
buildscript {
    ext {
        springBootVersion = '1.5.12.RELEASE'
        gradleDockerVersion = '3.2.7'
    }
    repositories {
        jcenter()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
        classpath("com.bmuschko:gradle-docker-plugin:${gradleDockerVersion}")
    }
}
apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'idea'
apply plugin: 'org.springframework.boot'
apply plugin: 'io.spring.dependency-management'
jar {
    baseName = 'gs-spring-boot'
    version = '0.1.0'
}
repositories {
    mavenCentral()
}
sourceCompatibility = 1.8
targetCompatibility = 1.8
compileJava.options.encoding = 'UTF-8'
dependencies {
    compile("org.springframework.boot:spring-boot-starter-web")
    testCompile("junit:junit")
}
project.ext.imageName = 'myimage'
project.ext.tagName = 'mytag'
project.ext.jarName = (jar.baseName + '-' + jar.version).toLowerCase()
apply from: 'docker.gradle'
apply from: 'docker.gradle'
docker.gradle
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.bmuschko:gradle-docker-plugin:3.2.7'
    }
}
repositories {
    jcenter()
}
// use fully qualified class name
apply plugin: com.bmuschko.gradle.docker.DockerRemoteApiPlugin
// import task classes
import com.bmuschko.gradle.docker.tasks.image.*
def imageName = project.ext.imageName
def tagName = project.ext.tagName
def jarName = project.ext.jarName
task createAppDockerfile(type: Dockerfile) {
    // Don't create dockerfile if file already exists
    onlyIf { !project.file('Dockerfile').exists() }
    group 'Docker'
    description 'Generate docker file for the application'
    dependsOn bootRepackage
    destFile = project.file('Dockerfile')
    String dockerProjFolder = project.projectDir.name
    from 'openjdk:8-jre-slim'
    runCommand("mkdir -p /app/springboot/${dockerProjFolder} && mkdir -p /app/springboot/${dockerProjFolder}/conf")
    addFile("./build/libs/${jarName}.jar", "/app/springboot/${dockerProjFolder}/")
    environmentVariable('CATALINA_BASE', "/app/springboot/${dockerProjFolder}")
    environmentVariable('CATALINA_HOME', "/app/springboot/${dockerProjFolder}")
    workingDir("/app/springboot/${dockerProjFolder}")
    if (System.properties.containsKey('debug')) {
        entryPoint('java', '-Xdebug', '-Xrunjdwp:server=y,transport=dt_socket,address=5005,suspend=n', '-jar', "/app/springboot/${dockerProjFolder}/${jarName}.jar")
    } else {
        entryPoint('java', '-jar', "/app/springboot/${dockerProjFolder}/${jarName}.jar")
    }
}
task removeAppImage(type: DockerRemoveImage) {
    group 'Docker'
    description 'Remove the docker image using force'
    force = true
    targetImageId { imageName }
    onError { exception ->
        if (exception.message.contains('No such image')) {
            println 'Docker image not found for the current project.'
        } else {
            print exception
        }
    }
}
task createAppImage(type: DockerBuildImage) {
    group 'Docker'
    description 'Executes bootRepackage, generates a docker file and builds image from it'
    dependsOn(createAppDockerfile, removeAppImage)
    dockerFile = createAppDockerfile.destFile
    inputDir = dockerFile.parentFile
    if (tagName)
        tag = "${tagName}"
    else if (imageName)
        tag = "${imageName}"
    else
        tag = "${jarName}"
}
The change was basically in my docker.gradle file: I had to add a repositories section pointing to jcenter() and use the fully qualified plugin class name.
I wonder why you would ever want to use a Docker plugin in your Gradle build. After playing for some time with the best-maintained one, Muschko's Gradle Docker Plugin, I don't understand why one should pile plugin boilerplate on top of Docker boilerplate, each with its own syntax.
Containerizing from Gradle is comfortable for programmers, but DevOps folks would say that's not how they work with Docker, and you should go their way if you need their support. I found the solution below, which is short, pure Docker-style, and functionally complete.
Keep a normal Dockerfile and a docker.sh shell script in the project structure:
In your build.gradle file:
buildscript {
    repositories {
        gradlePluginPortal()
    }
}
plugins {
    id 'java'
    id 'application'
}
// Entrypoint:
jar {
    manifest {
        attributes 'Main-Class': 'com.example.Test'
    }
}
mainClassName = 'com.example.Test'
// Run in DOCKER-container:
task runInDockerContainer {
    doLast {
        exec {
            workingDir '.' // Relative to project's root folder.
            commandLine 'sh', './build/resources/main/docker.sh'
        }
        // exec {
        //     workingDir "."
        //     commandLine 'sh', './build/resources/main/other.sh'
        // }
    }
}
Then do anything you want in a clean shell script, following the normal Docker docs:
# Everything is run relative to project's root folder:
# Put everything together into build/docker folder for DOCKER-build:
if [ -d "build/docker" ]; then rm -rf build/docker; fi;
mkdir build/docker;
cp build/libs/test.jar build/docker/test.jar;
cp build/resources/main/Dockerfile build/docker/Dockerfile;
# Build image from files in build/docker:
cd build/docker || exit;
docker build . -t test;
# Run container based on image:
echo $PWD
docker run test;
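With this in place, the whole build-and-containerize flow is a single invocation (assuming the task name defined above, and that build has produced test.jar first):
./gradlew build runInDockerContainer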

Use files as input to Jenkins JobDSL

I am trying to use Jenkins' Job DSL plugin to programmatically create jobs. However, I want to be able to define the parameters in a file. According to the docs on distributed builds, this may not be possible. Does anyone have an idea how I can achieve this? I could use the readFileFromWorkspace method, but I would still need to iterate over all the files provided and run Job DSL x times. The Job DSL code is below; the important part I am struggling with is the first 15 lines or so.
#!groovy
import groovy.io.FileType

def list = []
hudson.FilePath workspace = hudson.model.Executor.currentExecutor().getCurrentWorkspace()
def dir = new File(workspace.getRemote() + "/pipeline/applications")
dir.eachFile(FileType.FILES) { file ->
    list << file
}
list.each {
    println(it.path)
    def properties = new Properties()
    this.getClass().getResource(it.path).withInputStream {
        properties.load(it)
    }
    def _git_key_id = 'jenkins'
    consumablesRoot = '//pipeline_test'
    application_folder = "${consumablesRoot}/" + properties._application_name
    // Create the branch_indexer
    def jobName = "${application_folder}/branch_indexer"
    folder(consumablesRoot) {
        description("Ensure consumables folder is in place")
    }
    folder(application_folder) {
        description("Ensure app folder in consumables spaces is in place.")
    }
    job(jobName) {
        println("in the branch_indexer: ${GIT_BRANCH}")
        label('master')
        /* environmentVariables(
            __pipeline_code_repo: properties."__pipeline_code_repo",
            __pipeline_code_branch: properties."__pipeline_code_branch",
            __pipeline_scripts_code_repo: properties."__pipeline_scripts_code_repo",
            __pipeline_scripts_code_branch: properties."__pipeline_scripts_code_branch",
            __gcp_template_code_repo: properties."__gcp_template_code_repo",
            __gcp_template_code_branch: properties."__gcp_template_code_branch",
            _git_key_id: _git_key_id,
            _application_id: properties."_application_id",
            _application_name: properties."_application_name",
            _business_mnemonic: properties."_business_mnemonic",
            _control_repo: properties."_control_repo",
            _project_name: properties."_project_name"
        )*/
        scm {
            git {
                remote {
                    url(control_repo)
                    name('control_repo')
                    credentials(_git_key_id)
                }
                remote {
                    url(pipeline_code_repo)
                    name('pipeline_pipelines')
                    credentials(_git_key_id)
                }
            }
        }
        triggers {
            scm('@daily')
        }
        steps {
            // ensure that the latest code from the pipeline source code repo has been pulled
            shell("git ls-remote --heads control_repo | cut -d'/' -f3 | sort > .branches")
            shell("git checkout -f pipeline_pipelines/" + properties."pipeline_code_branch")
            // get the last branch from the control_repo repo
            shell("""
                git for-each-ref --sort=-committerdate refs/remotes | grep -i control_repo | head -n 1 > .last_branch
            """)
            dsl(['pipeline/branch_indexer.groovy'])
        }
    }
    // Start the branch_indexer
    queue(jobName)
}
In case someone else ends up here in search of a simple method for reading only one parameter file, use readFileFromWorkspace (as mentioned by @CodyK):
def file = readFileFromWorkspace(relative_path_to_file)
If the file contains a parameter called your_param, you can read it using ConfigSlurper:
def config = new ConfigSlurper().parse(file)
def your_param = config.getProperty("your_param")
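For example, if the workspace contains a parameters file pipeline/config.groovy (a hypothetical path, used here only for illustration) with the line your_param = "hello", the two snippets combine like this:
// Hypothetical parameters file 'pipeline/config.groovy' containing:
//   your_param = "hello"
def file = readFileFromWorkspace('pipeline/config.groovy')
def config = new ConfigSlurper().parse(file)
def your_param = config.getProperty("your_param")
assert your_param == "hello"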
I was able to get it working with this piece of code:
import hudson.FilePath

hudson.FilePath workspace = hudson.model.Executor.currentExecutor().getCurrentWorkspace()
// Build a list of all config files ending in .properties
def cwd = hudson.model.Executor.currentExecutor().getCurrentWorkspace().absolutize()
def configFiles = new FilePath(cwd, 'pipeline/applications').list('*.properties')
configFiles.each { file ->
    def properties = new Properties()
    def content = readFileFromWorkspace(file.getRemote())
    properties.load(new StringReader(content))
    // ... then create the jobs from 'properties', as in the original script
}
