"unknown flag: --platform" when use "docker buildx build" in Jenkins pipeline - docker

I am using a RedHat 7 system, and I want to use a Jenkins pipeline to implement DevOps.
But when I use the docker buildx build feature, Jenkins says "unknown flag: --platform".
I run my Jenkins from a Docker image:
docker run -d \
--name jenkins \
--restart=unless-stopped \
-u 0 \
--network jenkins \
-p 8082:8080 \
-p 50000:50000 \
-v /home/ngtl/jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(which docker):/usr/bin/docker \
-e TZ=Asia/Shanghai \
-e JAVA_OPTS=-Duser.timezone=Asia/Shanghai \
jenkins/jenkins:lts-jdk11
and this is my pipeline:
pipeline {
    agent any
    tools {
        maven 'mvn'
    }
    environment {
        DOCKER_CREDENTIALS = credentials('clouds3n-ldap')
    }
    stages {
        stage('Unit Test') {
            steps {
                withMaven(maven: 'mvn') {
                    sh 'mvn clean test -Dmaven.test.failure.ignore=false'
                }
            }
        }
        stage('Maven Build') {
            steps {
                withMaven(maven: 'mvn') {
                    sh 'mvn package -Dmaven.test.skip -DskipTests'
                }
            }
        }
        stage('Sonar Scan') {
            steps {
                withSonarQubeEnv('sonarqube') {
                    withMaven(maven: 'mvn') {
                        script {
                            def allJob = env.JOB_NAME.tokenize('/') as String[]
                            def projectName = allJob[0]
                            sh "mvn sonar:sonar -Dsonar.branch.name=${env.GIT_BRANCH} -Dsonar.projectKey=${projectName} -Dsonar.projectName=${projectName} -Dmaven.test.skip -DskipTests"
                        }
                    }
                }
            }
        }
        stage('Sonar Gate') {
            steps {
                timeout(time: 30, unit: 'MINUTES') {
                    waitForQualityGate abortPipeline: true
                }
            }
        }
        stage('Docker Build') {
            steps {
                script {
                    def allJob = env.JOB_NAME.tokenize('/') as String[]
                    def projectName = allJob[0]
                    final noSuffixProjectName = projectName.substring(0, projectName.lastIndexOf('-'))
                    sh "echo ${DOCKER_CREDENTIALS_PSW} | docker login -u ${DOCKER_CREDENTIALS_USR} 192.168.2.157:8881 --password-stdin"
                    sh "docker buildx build --platform linux/amd64 -t 192.168.2.157:8881/uni/${noSuffixProjectName}:dev-${BUILD_NUMBER} -f ${env.JENKINS_HOME}/k8s-config/docker/BackendDockerfile . --push"
                }
            }
        }
        stage('Maven Deploy') {
            steps {
                withMaven(maven: 'mvn') {
                    sh 'mvn deploy -Dmaven.test.skip -DskipTests'
                }
            }
        }
        stage('K8s Apply') {
            steps {
                echo 'not support now, comming soon'
            }
        }
    }
    post {
        always {
            sh 'docker logout 192.168.2.157:8881'
        }
        cleanup {
            cleanWs()
        }
        success {
            echo 'Finished!'
        }
    }
}
When reach "Docker Build" stage, Jenkins will throw error :
Warning: A secret was passed to "sh" using Groovy String interpolation, which is insecure.
Affected argument(s) used the following variable(s): [DOCKER_CREDENTIALS_PSW]
See https://jenkins.io/redirect/groovy-string-interpolation for details.
+ echo ****
+ docker login -u **** 192.168.2.157:8881 --password-stdin
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Pipeline] sh
+ docker buildx build --platform linux/amd64 -t 192.168.2.157:8881/uni/cqu:dev-11 -f /var/jenkins_home/k8s-config/docker/BackendDockerfile . --push
unknown flag: --platform
See 'docker --help'.
Why can the Jenkins pipeline not use the "--platform" option? How do I fix this problem?

Make sure your Jenkins agent (slave) has a recent version of Docker.
BuildKit has been integrated into docker build since Docker 18.06.
In my case version 18.09.6 did not work, but 20.10 is good.
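As a quick sanity check, you can confirm which Docker version and whether the buildx plugin are actually visible from inside the Jenkins container (the container name jenkins comes from the docker run command above; since the host's docker binary is bind-mounted in, the client version you see is the host's):

docker exec jenkins docker version --format '{{.Client.Version}} / {{.Server.Version}}'
docker exec jenkins docker buildx version    # errors out if the buildx plugin is missing

If the version printed is old (20.10 is known good per the answer above) or the buildx command is not found, upgrade Docker on the RedHat host and recreate the Jenkins container.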

Related

Jenkins - Mark build as success

I use Jenkins to build my Maven Java app, then create a Docker image and push it. After all of that I have a try-catch where I try to stop and remove the container if it's already running; if not, it should just skip that and run the new image. It works, but it always marks the build as failed. I tried to change the build status, but apparently that is not possible.
Here is my pipeline:
node {
    stage('Clone repository') {
        git branch: 'main', credentialsId: 'realsnack-git', url: 'https://github.com/Realsnack/Java-rest-api.git'
    }
    stage('Build maven project') {
        sh './mvnw clean package'
    }
    stage('Build docker image') {
        sh 'docker build -t 192.168.1.27:49153/java-restapi:latest .'
    }
    stage('Push image') {
        sh 'docker push 192.168.1.27:49153/java-restapi:latest'
    }
    try {
        stage('Remove old container') {
            sh 'docker stop java-rest_api && docker rm java-rest_api'
        }
    } catch(all) {
        sh 'No container to remove - runnning it anyway'
    } finally {
        stage('Run image') {
            sh 'docker run -d --name java-rest_api -p 8081:8081 192.168.1.27:49153/java-restapi:latest'
        }
    }
}
docker stop fails if there is no container with that name to stop.
You can solve the issue in one of two ways:
Check that the container exists before attempting to stop it:
sh "if docker ps -a | grep -q java-rest_api; then docker stop java-rest_api; fi"
Ignore the docker error:
sh "docker stop java-rest_api || true"

How to create new docker container and run it from Jenkinsfile

I've inherited this Jenkinsfile stage that will run a new docker image using withRun:
stage('Deploy') {
    steps {
        script {
            docker.image('deployscript:latest').withRun("""\
-e 'IMAGE=${IMAGE_NAME}:${BUILD_ID}' \
-e 'CNAME=${IMAGE_NAME}' \
-e 'PORT=${PORT_1}:80' \
-e 'PORT=${PORT_2}:443'""") { c ->
                sh "docker logs ${c.id}"
            }
        }
    }
}
However, I believe this method is only meant for testing purposes and actually stops the container once the block is finished. I want this step to actually run the container and stop/restart the previous one if necessary. The documentation out there on this is surprisingly sparse. Please help.
If you want to run the Docker container throughout all the stages, then the example would look like the one below:
Scripted Pipeline
node('master') {
    /* Requires the Docker Pipeline plugin to be installed */
    docker.image('alpine:latest').inside {
        stage('01') {
            sh 'echo STAGE01'
        }
        stage('02') {
            sh 'echo STAGE02'
        }
    }
}
Declarative Pipeline
pipeline {
    agent {
        docker {
            image 'alpine:latest'
            label 'master'
            args '-v /tmp:/tmp'
        }
    }
    stages {
        stage('01') {
            steps {
                sh "echo STAGE01"
            }
        }
        stage('02') {
            steps {
                sh "echo STAGE02"
            }
        }
    }
}
In both the scripted and the declarative pipeline, the Docker container started from the alpine image stays active until all the stages have finished, and it is removed once the build ends, whether it succeeds or fails.
But if you want to control starting, stopping and restarting the container yourself in different stages, you can do it with shell commands, or by writing a small Groovy wrapper around the docker command, like below:
node {
    stage('init') {
        sh 'docker create --name myImage1 -v $(pwd):/var/jenkins -w /var/jenkins imageName:tag'
    }
    stage('build') {
        // use docker commands to start, stop and execute scripts inside the container
        // the same goes for the other stages
        // once everything is done you can remove the container
        sh 'docker rm myImage1'
    }
}
The following will stop the existing container and run a new one with the new image:
stage('Deploy') {
    steps {
        sh "docker stop ${IMAGE_NAME} || true && docker rm ${IMAGE_NAME} || true"
        sh "docker run -d \
            --name ${IMAGE_NAME} \
            --publish ${PORT}:443 \
            ${IMAGE_NAME}:${BUILD_ID}"
    }
}

Docker not running in Jenkins Pipeline

I am running a jenkins docker image by doing this:
docker run \
--rm \
-u root \
-p 8080:8080 \
-v /home/ec2-user/jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$HOME":/home \
jenkins/jenkins:lts
My Jenkins server is up, but when I try to run a Docker image build as below:
pipeline {
    environment {
        registry = "leexha/node_demo"
        registyCredential = 'dockerhub'
        dockerImage = ''
    }
    agent any
    tools {
        nodejs "node"
    }
    stages {
        stage('Git clone') {
            steps {
                git 'https://github.com/leeadh/node-jenkins-app-example.git'
            }
        }
        stage('Installing Node') {
            steps {
                sh 'npm install'
            }
        }
        stage('Conducting Unit test') {
            steps {
                sh 'npm test'
            }
        }
        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }
        stage('Pushing to Docker Hub') {
            steps {
                script {
                    docker.withRegistry('', registyCredential) {
                        dockerImage.push()
                    }
                }
            }
        }
    }
}
it keeps telling me that docker is not found.
I already enabled communication with the Docker daemon via -v /var/run/docker.sock:/var/run/docker.sock,
so I'm pretty confused about what's going on.
Any help?
You need to install the Docker client on the Jenkins server (inside the Jenkins image container), and install and configure the Docker plugin on your Jenkins server (for this pipeline, the Docker Pipeline plugin, which provides docker.build and docker.withRegistry).
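One way to do that, sketched here on the assumption that the jenkins/jenkins:lts image is Debian-based and that Debian's docker.io package is acceptable for the CLI, is to build a custom Jenkins image with Docker installed and run that instead (the tag myjenkins-docker is made up):

# Dockerfile (assumption: Debian-based base image, docker.io available via apt)
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
USER jenkins

Build it with docker build -t myjenkins-docker . and start it with the same docker run command as above, keeping the -v /var/run/docker.sock:/var/run/docker.sock mount so the client inside the container can reach the host daemon. The Docker Pipeline plugin is installed from Manage Jenkins > Manage Plugins.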

How to fix script.sh: line 1: Builing...: not found problem?

I have used both the jenkins/jenkins:latest and jenkinsci/blueocean:latest Docker images with the "Pipeline script from SCM" setting.
General setting "GitHub project" was enabled with https://github.com/alamsarker/test
Now when I build, it shows the following error:
+ Builing...
/var/jenkins_home/workspace/pipeline-test#tmp/durable-2aac8cac/script.sh: line 1: Builing...: not found
Can you please help me fix the issue?
I run Docker with:
docker run \
-u root \
--rm \
-d \
-p 8080:8080 \
-p 50000:50000 \
-v jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
jenkinsci/blueocean
My Jenkinsfile is simple, as follows:
pipeline {
    agent any
    stages {
        stage('build') {
            steps {
                sh 'Builing...'
            }
        }
        stage('Test') {
            steps {
                sh 'Testing...'
            }
        }
        stage('Deploy') {
            steps {
                sh 'Deploying...'
            }
        }
    }
}
The pipeline step sh is used to execute a Linux command, and Builing... is not a valid Linux command; that is why you get the error.
If you want to print out some text, you can use the echo step, which is cross-platform, or execute the Linux command echo via the sh step, like sh 'echo Building...', which only works on Linux-like agents.
pipeline {
    agent any
    stages {
        stage('build') {
            steps {
                echo 'Builing...'
            }
        }
        stage('Test') {
            steps {
                sh 'echo Testing...'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying...'
            }
        }
    }
}

using jenkins docker plugin for storage persistent containers in a build pipeline

This is the groovy script for a simple build pipeline that uses the docker image of SQL Server on Linux:
def PowerShell(psCmd) {
    bat "powershell.exe -NonInteractive -ExecutionPolicy Bypass -Command \"\$ErrorActionPreference='Stop';$psCmd;EXIT \$global:LastExitCode\""
}
node {
    stage('git checkout') {
        git 'file:///C:/Projects/SsdtDevOpsDemo'
    }
    stage('build dacpac') {
        bat "\"${tool name: 'Default', type: 'msbuild'}\" /p:Configuration=Release"
        stash includes: 'SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac', name: 'theDacpac'
    }
    stage('start container') {
        sh 'docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=P#ssword1" --name SQLLinuxLocal2 -d -i -p 15566:1433 microsoft/mssql-server-linux'
    }
    stage('deploy dacpac') {
        unstash 'theDacpac'
        bat "\"C:\\Program Files\\Microsoft SQL Server\\140\\DAC\\bin\\sqlpackage.exe\" /Action:Publish /SourceFile:\"SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac\" /TargetConnectionString:\"server=localhost,15566;database=SsdtDevOpsDemo;user id=sa;password=P#ssword1\""
    }
    stage('run tests') {
        PowerShell('Start-Sleep -s 5')
    }
    stage('cleanup') {
        sh 'docker stop SQLLinuxLocal2'
        sh 'docker rm SQLLinuxLocal2'
    }
}
I got to this point with some help on a question I posted a day or so ago. This was my attempt (again with some help) at doing the same thing, but with the docker plugin:
def PowerShell(psCmd) {
    bat "powershell.exe -NonInteractive -ExecutionPolicy Bypass -Command \"\$ErrorActionPreference='Stop';$psCmd;EXIT \$global:LastExitCode\""
}
node {
    stage('git checkout') {
        git 'file:///C:/Projects/SsdtDevOpsDemo'
    }
    stage('Build Dacpac from SQLProj') {
        bat "\"${tool name: 'Default', type: 'msbuild'}\" /p:Configuration=Release"
        stash includes: 'SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac', name: 'theDacpac'
    }
    stage('start container') {
        docker.image('-e "ACCEPT_EULA=Y" -e "SA_PASSWORD=P#ssword1" --name SQLLinuxLocal2 -d -i -p 15566:1433 microsoft/mssql-server-linux').withRun() {
            unstash 'theDacpac'
            bat "\"C:\\Program Files\\Microsoft SQL Server\\140\\DAC\\bin\\sqlpackage.exe\" /Action:Publish /SourceFile:\"SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac\" /TargetConnectionString:\"server=localhost,15566;database=SsdtDevOpsDemo;user id=sa;password=P#ssword1\""
        }
        sh 'docker run -d --name SQLLinuxLocal2 microsoft/mssql-server-linux'
    }
    stage('sleep') {
        PowerShell('Start-Sleep -s 30')
    }
    stage('cleanup') {
        sh 'docker stop SQLLinuxLocal2'
        sh 'docker rm SQLLinuxLocal2'
    }
}
The problem with this is that although it works, the docker run -d line spins up a different incarnation of the container. Could someone please point me in the right direction for getting the same result as the first pipeline, but using the docker plugin?
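For comparison, the withRun example earlier on this page passes the image name to docker.image() and the run options to withRun(); applied to the 'start container' stage, that pattern would look roughly like the sketch below (untested, and note that withRun still stops and removes the container when its block ends, which is part of what this question is asking about):

stage('start container') {
    // sketch only: image name in docker.image(), run options in withRun()
    docker.image('microsoft/mssql-server-linux').withRun('-e "ACCEPT_EULA=Y" -e "SA_PASSWORD=P#ssword1" --name SQLLinuxLocal2 -p 15566:1433') { c ->
        unstash 'theDacpac'
        bat "\"C:\\Program Files\\Microsoft SQL Server\\140\\DAC\\bin\\sqlpackage.exe\" /Action:Publish /SourceFile:\"SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac\" /TargetConnectionString:\"server=localhost,15566;database=SsdtDevOpsDemo;user id=sa;password=P#ssword1\""
    }
}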
