Drone Pipeline: Drone Cache mount path for Maven repository not able to resolve (maven-3)

I'm new to Drone pipelines and am interested in using it in my current project for CI/CD.
My project tech stack is as follows:
Java
Spring Boot
Maven
I have created a sample Drone pipeline, but I am not able to cache the Maven dependencies that are downloaded and stored in the .m2 folder.
The cache step always says the mount path is not available or not found. Please find the screenshot for the same:
Screenshot: Drone mount path issue
I am not sure which path to provide here. Can someone help me understand the mount path we need to provide so that all the dependencies in the .m2 path are cached?
Adding the pipeline information below:
kind: pipeline
type: docker
name: config-server
steps:
  - name: restore-cache
    image: meltwater/drone-cache
    pull: if-not-exists
    settings:
      backend: "filesystem"
      restore: true
      cache_key: "volume"
      archive_format: "gzip"
      mount:
        - ./target
        - /root/.m2/repository
    volumes:
      - name: cache
        path: /tmp/cache
  - name: build
    image: maven:3.8.3-openjdk-17
    pull: if-not-exists
    environment:
      M2_HOME: /usr/share/maven
      MAVEN_CONFIG: /root/.m2
    commands:
      - mvn clean install -DskipTests=true -B -V
    volumes:
      - name: cache
        path: /tmp/cache
  - name: rebuild-cache
    image: meltwater/drone-cache
    pull: if-not-exists
    settings:
      backend: "filesystem"
      rebuild: true
      cache_key: "volume"
      archive_format: "gzip"
      mount:
        - ./target
        - /root/.m2/repository
    volumes:
      - name: cache
        path: /tmp/cache
trigger:
  branch:
    - main
  event:
    - push
volumes:
  - name: cache
    host:
      path: /var/lib/cache
Thanks in advance.

Resolved the issue. Please find the solution and the working Drone pipeline below. The key change is to keep the Maven repository inside the build workspace (mount ./.m2/repository and pass -Dmaven.repo.local=.m2/repository to Maven): each Drone step runs in its own container and only the workspace is shared between steps, so the /root/.m2/repository path used before was never visible to the cache steps.
kind: pipeline
type: docker
name: data-importer
steps:
  - name: restore-cache
    image: meltwater/drone-cache
    pull: if-not-exists
    settings:
      backend: "filesystem"
      restore: true
      ttl: 1
      cache_key: "volume"
      archive_format: "gzip"
      mount:
        - ./.m2/repository
    volumes:
      - name: cache
        path: /tmp/cache
  - name: maven-build
    image: maven:3.8.6-amazoncorretto-11
    pull: if-not-exists
    commands:
      - mvn clean install -DskipTests=true -Dmaven.repo.local=.m2/repository -B -V
    volumes:
      - name: cache
        path: /tmp/cache
  - name: rebuild-cache
    image: meltwater/drone-cache
    pull: if-not-exists
    settings:
      backend: "filesystem"
      rebuild: true
      cache_key: "volume"
      archive_format: "gzip"
      ttl: 1
      mount:
        - ./.m2/repository
    volumes:
      - name: cache
        path: /tmp/cache
trigger:
  branch:
    - main
    - feature/*
  event:
    - push
volumes:
  - name: cache
    host:
      path: /var/lib/cache
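As a side note that is not part of the original answer: if the pipeline grows to several Maven steps, repeating -Dmaven.repo.local on every command can be avoided by committing a .mvn/maven.config file at the repository root, which Maven 3.3.1+ picks up automatically. A minimal sketch:
# .mvn/maven.config -- extra CLI options applied to every mvn invocation in this project
-Dmaven.repo.local=.m2/repository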

Related

Cannot mount local config for maven in docker multi-stage builds

I am using Jenkins with a Kubernetes agent (Kubernetes plugin).
I want to do a Docker multi-stage build. In the Maven build stage I want to use a local repository (configured in settings.xml).
I already created a ConfigMap on K8s and mounted it while running the build job.
agent {
    kubernetes {
        yaml '''
apiVersion: v1
kind: Pod
spec:
  volumes:
    - name: docker-socket
      emptyDir: {}
    - configMap:
        defaultMode: 420
        name: nexus-xml-test
      name: config-vol
  containers:
    - name: docker-pod
      image: docker:19.03.1
      command:
        - cat
      tty: true
      volumeMounts:
        - name: docker-socket
          mountPath: /var/run
        - mountPath: /root/.m2
          name: config-vol
    - name: docker-daemon
      image: docker:19.03.1-dind
      securityContext:
        privileged: true
      volumeMounts:
        - name: docker-socket
          mountPath: /var/run
'''
    }
}
And I already verified that settings.xml is mounted:
2022-09-13 17:27:54 + cd /root/.m2
2022-09-13 17:27:54 + ls
2022-09-13 17:27:54 settings.xml
In the Dockerfile I added this command:
RUN mvn -s /root/.m2/settings.xml
But when I build, it cannot find settings.xml:
2022-09-13 17:45:48 Step 2/10 : RUN mvn -s /root/.m2/settings.xml
2022-09-13 17:45:51 ---> Running in e779f9fcf9b6
2022-09-13 17:45:52 [ERROR] Error executing Maven.
2022-09-13 17:45:52 [ERROR] The specified user settings file does not exist: /root/.m2/settings.xml
2022-09-13 17:45:52 The command '/bin/sh -c mvn -s /root/.m2/settings.xml' returned a non-zero code: 1
Please help and suggest a fix.
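A note on why this happens (this explanation is not from the original thread): the RUN instruction runs inside a temporary container that the Docker daemon creates from the build context, not inside the Jenkins agent container, so the ConfigMap mounted at /root/.m2 in the pod is never visible during the image build. One common workaround is to copy settings.xml into the build context and COPY it into the image before calling Maven; the sketch below assumes a Maven base image and a standard project layout:
# In the Jenkins stage, before building the image:
#   cp /root/.m2/settings.xml ./settings.xml
#   docker build -t my-app .

# Dockerfile (illustrative sketch)
FROM maven:3.8.3-openjdk-17 AS build
WORKDIR /build
COPY settings.xml /root/.m2/settings.xml
COPY pom.xml .
COPY src ./src
RUN mvn -s /root/.m2/settings.xml clean package -DskipTests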

skaffold - the custom script didn't produce an image with tag

Custom build script, docker/buildx.sh:
docker buildx create --name awear-builder --platform $platforms --driver-opt=network=host
docker buildx build --builder awear-builder --tag $IMAGE --platform linux/arm64 --push -f ./docker/Dockerfile .
skaffold.yaml:
apiVersion: skaffold/v2beta19
kind: Config
metadata:
  name: micro-one
build:
  artifacts:
    - image: localhost:5000/micro-one
      context: .
      custom:
        buildCommand: sh docker/buildx.sh
        dependencies:
          paths:
            - docker/buildx.sh
            - src/*
  tagPolicy:
    sha256: {}
  local:
    push: false
deploy:
  kustomize:
    paths: ["k8s/overlays/dev/"]
  # kubectl:
  #   manifests:
  #     - deployment.yaml
portForward:
  - resourceType: service
    resourceName: micro-one
    namespace: default
    port: 80
    localPort: 8080
profiles:
  - name: test
    build:
      local: {}
Error:
the custom script didn't produce an image with tag [localhost:5000/micro-one:latest]
The following did the trick:
docker buildx prune
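For reference (this is not part of the original answer): the error generally means the custom script neither pushed nor loaded an image with the exact tag skaffold passed in $IMAGE. A hedged sketch of docker/buildx.sh that follows skaffold's custom-build contract, pushing only when skaffold sets PUSH_IMAGE=true and otherwise loading the image into the local Docker daemon, could look like this (the builder name and platform are carried over from the question):
#!/bin/sh
set -eu

# skaffold exports IMAGE, PUSH_IMAGE and BUILD_CONTEXT to custom build scripts.
docker buildx create --name awear-builder --driver-opt=network=host 2>/dev/null || true

if [ "${PUSH_IMAGE:-false}" = "true" ]; then
  # skaffold wants the image pushed to the registry
  docker buildx build --builder awear-builder --tag "$IMAGE" \
    --platform linux/arm64 --push -f ./docker/Dockerfile "$BUILD_CONTEXT"
else
  # with push: false, skaffold expects the tagged image in the local daemon
  docker buildx build --builder awear-builder --tag "$IMAGE" \
    --platform linux/arm64 --load -f ./docker/Dockerfile "$BUILD_CONTEXT"
fi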

Fixing 'invalid reference format' error in docker-image-resource put

I'm currently trying to build and push Docker images; the issue is that I'm receiving this message from Concourse during the wordpress-release put step:
waiting for docker to come up...
invalid reference format
Here's the important bit of the Concourse pipeline:
- name: wordpress-release
  type: docker-image
  source:
    repository: #############.dkr.ecr.eu-west-1.amazonaws.com/wordpress-release
    aws_access_key_id: #############
    aws_secret_access_key: #############
- name: mysql-release
  type: docker-image
  source:
    repository: #############.dkr.ecr.eu-west-1.amazonaws.com/mysql-release
    aws_access_key_id: #############
    aws_secret_access_key: #############
jobs:
  - name: job-hello-world
    plan:
      - get: wordpress-website
      - task: write-release-tag
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: { repository: alpine/git }
          inputs:
            - name: wordpress-website
          outputs:
            - name: tags
          run:
            dir: wordpress-website
            path: sh
            args:
              - -exc
              - |
                printf $(basename $(git remote get-url origin) | sed 's/\.[^.]*$/-/')$(git tag --points-at HEAD) > ../tags/release-tag
      - put: wordpress-release
        params:
          build: ./wordpress-website/.
          dockerfile: wordpress-website/shared-wordpress-images/wordpress/wordpress-release/Dockerfile
          tag_file: tags/release-tag
      - put: mysql-release
        params:
          build: ./wordpress-website/
          dockerfile: wordpress-website/shared-wordpress-images/db/mysql-release/Dockerfile
          tag_file: tags/release-tag
Those images contain FROM #############.dkr.ecr.eu-west-1.amazonaws.com/shared-mysql (and shared-wordpress); could this be an issue?
The tag_file: tags/release-tag doesn't seem to be the issue, as this still happens even without it.
This is Concourse 5.0 running on top of Docker in Windows 10.
Any thoughts?
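One debugging step worth trying (this suggestion is not from the original question): since "invalid reference format" is Docker's error for a malformed image reference, it can help to print the generated tag right after the write-release-tag step, to confirm that tags/release-tag contains a single, non-empty tag with no illegal characters or stray newlines. A sketch of such an extra plan step:
- task: debug-release-tag
  config:
    platform: linux
    image_resource:
      type: registry-image
      source: { repository: alpine }
    inputs:
      - name: tags
    run:
      path: sh
      args: ["-exc", "cat tags/release-tag; wc -c tags/release-tag"]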

error resolving dockerfile path: please provide a valid path to a Dockerfile within the build context with --dockerfile

apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - "--context=dir:///workspace"
        - "--dockerfile=/workspace/Dockerfile"
        - "--destination=gcr.io/kubernetsjenkins/jenkinsondoc:latest"
      volumeMounts:
        - name: kaniko-secret
          mountPath: /secret
        - name: context
          mountPath: /workspace
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /secret/kaniko-secret.json
  restartPolicy: Never
  volumes:
    - name: kaniko-secret
      secret:
        secretName: kaniko-secret
    - name: context
      hostPath:
        path: /home/sabadsulla/kanikodir
I am running Kaniko in a Kubernetes pod to build a Docker image and push it to GCR.
When I use Google Cloud Storage for the CONTEXT_PATH it works fine, but I need to use a local directory (meaning the shared volume of the pod) as the CONTEXT_PATH, and it throws an error:
"Error: error resolving dockerfile path: please provide a valid path to a Dockerfile within the build context with --dockerfile
Usage:
I tried with the args "--context=/workspace" and "--context=dir://workspace"; it gives the same error.
The folder looks like this. On the host:
/home/sabadsulla/kanikodir/Dockerfile
When it goes through the PV/PVC, in the pod container it becomes:
/workspace/Dockerfile
So for the Kaniko executor, if we map the context to /workspace, the Dockerfile path relative to the context is simply Dockerfile:
--context=/workspace
--dockerfile=Dockerfile
Using the Kaniko container with the volume mounted as a persistent volume claim, please try using "--dockerfile=./Dockerfile":
containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: ["--dockerfile=./Dockerfile",
           "--context=/workspace/",
           "--destination=gcr.io/kubernetsjenkins/jenkinsondoc:latest"]
    volumeMounts:
      - name: kaniko-secret
        mountPath: /secret
      - name: context
        mountPath: /workspace/
Using the default values:
--dockerfile string   Path to the dockerfile to be built. (default "Dockerfile")
--context string      Path to the dockerfile build context. (default "/workspace/")
Even this one statement works:
args: ["--destination=gcr.io/kubernetsjenkins/jenkinsondoc:latest"]
Hope this helps. Could you please test it and share the results?
Hi, I just solved this problem.
My node name: m1.env.lab.io
My Dockerfile path: /root/kaniko/demo1/Dockerfile
FROM ubuntu
ENTRYPOINT ["/bin/bash", "-c", "echo hello"]
pod.yaml in /root/kaniko/demo1/pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
  namespace: kaniko
spec:
  nodeName: m1.env.lab.io
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args: ["--verbosity=trace",
             "--log-format=color",
             "--dockerfile=Dockerfile",
             "--context=dir:///workspace/",
             "--destination=registry.local/cloud2go/kaniko-ubuntu:v0.1"] # no account and password for my registry.
      volumeMounts:
        - name: dockerfile-storage
          mountPath: /workspace/
  restartPolicy: Never
  volumes:
    - name: dockerfile-storage
      hostPath:
        path: /root/kaniko/demo1
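To try this out (illustrative commands, not part of the original answer), the pod can be applied and the build output followed with kubectl:
kubectl create namespace kaniko
kubectl apply -f /root/kaniko/demo1/pod.yaml
kubectl logs -f kaniko -n kaniko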

Jenkins installation automation

Old Question
Is it possible to automate the Jenkins installation (Jenkins binaries, plugins, credentials) by using a configuration management automation tool such as Ansible?
Edited
After this question was asked, I learned of and found many ways to set up Jenkins. I found docker-compose an interesting way to achieve one form of Jenkins installation automation. So my questions are: is there a better way to automate the Jenkins installation than what I am doing, and is there any risk in the way I am handling this automation?
I have taken advantage of the Docker Jenkins image and did the automation with docker-compose.
Dockerfile
FROM jenkinsci/blueocean
RUN jenkins-plugin-cli --plugins kubernetes workflow-aggregator git configuration-as-code blueocean matrix-auth
docker-compose.yaml
version: '3.7'
services:
  dind:
    image: docker:dind
    privileged: true
    networks:
      jenkins:
        aliases:
          - docker
    expose:
      - "2376"
    environment:
      - DOCKER_TLS_CERTDIR=/certs
    volumes:
      - type: volume
        source: jenkins-home
        target: /var/jenkins_home
      - type: volume
        source: jenkins-docker-certs
        target: /certs/client
  jcac:
    image: nginx:latest
    volumes:
      - type: bind
        source: ./jcac.yml
        target: /usr/share/nginx/html/jcac.yml
    networks:
      - jenkins
  jenkins:
    build: .
    ports:
      - "8080:8080"
      - "50000:50000"
    environment:
      - DOCKER_HOST=tcp://docker:2376
      - DOCKER_CERT_PATH=/certs/client
      - DOCKER_TLS_VERIFY=1
      - JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
      - CASC_JENKINS_CONFIG=http://jcac/jcac.yml
      - GITHUB_ACCESS_TOKEN=${GITHUB_ACCESS_TOKEN:-fake}
      - GITHUB_USERNAME=${GITHUB_USERNAME:-fake}
    volumes:
      - type: volume
        source: jenkins-home
        target: /var/jenkins_home
      - type: volume
        source: jenkins-docker-certs
        target: /certs/client
        read_only: true
    networks:
      - jenkins
volumes:
  jenkins-home:
  jenkins-docker-certs:
networks:
  jenkins:
jcac.yaml
credentials:
  system:
    domainCredentials:
      - credentials:
          - usernamePassword:
              id: "github"
              password: ${GITHUB_PASSWORD:-fake}
              scope: GLOBAL
              username: ${GITHUB_USERNAME:-fake}
          - usernamePassword:
              id: "slave"
              password: ${SSH_PASSWORD:-fake}
              username: ${SSH_USERNAME:-fake}
jenkins:
  globalNodeProperties:
    - envVars:
        env:
          - key: "BRANCH"
            value: "hello"
  systemMessage: "Welcome to (one click) Jenkins Automation!"
  agentProtocols:
    - "JNLP4-connect"
    - "Ping"
  crumbIssuer:
    standard:
      excludeClientIPFromCrumb: true
  disableRememberMe: false
  markupFormatter: "plainText"
  mode: NORMAL
  myViewsTabBar: "standard"
  numExecutors: 4
  # nodes:
  #   - permanent:
  #       labelString: "slave01"
  #       launcher:
  #         ssh:
  #           credentialsId: "slave"
  #           host: "worker"
  #           port: 22
  #           sshHostKeyVerificationStrategy: "nonVerifyingKeyVerificationStrategy"
  #       name: "slave01"
  #       nodeDescription: "SSH Slave 01"
  #       numExecutors: 3
  #       remoteFS: "/home/jenkins/workspace"
  #       retentionStrategy: "always"
  securityRealm:
    local:
      allowsSignup: false
      enableCaptcha: false
      users:
        - id: "admin"
          password: "${ADMIN_PASSWORD:-admin123}"
        - id: "user"
          password: "${DEFAULTUSER_PASSWORD:-user123}"
  authorizationStrategy:
    globalMatrix:
      permissions:
        - "Agent/Build:user"
        - "Job/Build:user"
        - "Job/Cancel:user"
        - "Job/Read:user"
        - "Overall/Read:user"
        - "View/Read:user"
        - "Overall/Read:anonymous"
        - "Overall/Administer:admin"
        - "Overall/Administer:root"
unclassified:
  globalLibraries:
    libraries:
      - defaultVersion: "master"
        implicit: false
        name: "jenkins-shared-library"
        retriever:
          modernSCM:
            scm:
              git:
                remote: "https://github.com/samitkumarpatel/jenkins-shared-libs.git"
                traits:
                  - "gitBranchDiscovery"
The commands to start and stop Jenkins are:
# start Jenkins
docker-compose up -d
# stop Jenkins
docker-compose down
Sure it is :) For Ansible you can always check Ansible Galaxy whenever you want to automate the installation of something. Here is the most popular role for installing Jenkins, and here is its GitHub repo.
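For illustration only (the original answer links to the role without naming it here, so the role name is an assumption): installing and applying the widely used geerlingguy.jenkins role from Ansible Galaxy could look roughly like this.
# ansible-galaxy install geerlingguy.jenkins

# playbook.yml -- minimal sketch; assumes Java is already present on the host
# (for example via the geerlingguy.java role) and an inventory group named "jenkins"
- hosts: jenkins
  become: true
  vars:
    jenkins_plugins:
      - git
      - workflow-aggregator
      - configuration-as-code
  roles:
    - geerlingguy.jenkins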
