I have used both the jenkins/jenkins:latest and jenkinsci/blueocean:latest Docker images with the "Pipeline script from SCM" setting.
The general setting "GitHub project" was enabled with https://github.com/alamsarker/test
Now when I build, it shows the following error:
+ Builing...
/var/jenkins_home/workspace/pipeline-test@tmp/durable-2aac8cac/script.sh: line 1: Builing...: not found
Can you please help me fix the issue?
I run docker by:
docker run \
-u root \
--rm \
-d \
-p 8080:8080 \
-p 50000:50000 \
-v jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
jenkinsci/blueocean
My Jenkinsfile is as simple as the following:
pipeline {
    agent any
    stages {
        stage('build') {
            steps {
                sh 'Builing...'
            }
        }
        stage('Test') {
            steps {
                sh 'Testing...'
            }
        }
        stage('Deploy') {
            steps {
                sh 'Deploying...'
            }
        }
    }
}
The pipeline step sh is used to execute a shell command. Builing... is not a valid command, which is why you get the "not found" error.
If you want to print some text, you can use the echo step, which is cross-platform, or run the shell command echo via the sh step, as in sh 'echo Building...', which only works on Linux-like agents.
pipeline {
    agent any
    stages {
        stage('build') {
            steps {
                echo 'Building...'
            }
        }
        stage('Test') {
            steps {
                sh 'echo Testing...'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying...'
            }
        }
    }
}
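To see why the original failed, note that the sh step simply hands its argument to /bin/sh, which looks up the first word as a command on PATH. This can be reproduced locally without Jenkins:

```shell
# A bare string is looked up as a command name and fails with exit 127...
sh -c 'Builing...' 2>/dev/null || echo "not found (exit $?)"
# ...while echo is a real command that prints its arguments:
sh -c 'echo Building...'
```

The same lookup happens inside the durable-task script that Jenkins generates for each sh step, which is exactly the "line 1: Builing...: not found" message above.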
Related
I am using a RedHat 7 system, and I want to use a Jenkins Pipeline to implement DevOps.
But when I use the docker buildx build feature, Jenkins says "unknown flag: --platform".
I run my Jenkins with this Docker image:
docker run -d \
--name jenkins \
--restart=unless-stopped \
-u 0 \
--network jenkins \
-p 8082:8080 \
-p 50000:50000 \
-v /home/ngtl/jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(which docker):/usr/bin/docker \
-e TZ=Asia/Shanghai \
-e JAVA_OPTS=-Duser.timezone=Asia/Shanghai \
jenkins/jenkins:lts-jdk11
and this is my pipeline:
pipeline {
    agent any
    tools {
        maven 'mvn'
    }
    environment {
        DOCKER_CREDENTIALS = credentials('clouds3n-ldap')
    }
    stages {
        stage('Unit Test') {
            steps {
                withMaven(maven: 'mvn') {
                    sh 'mvn clean test -Dmaven.test.failure.ignore=false'
                }
            }
        }
        stage('Maven Build') {
            steps {
                withMaven(maven: 'mvn') {
                    sh 'mvn package -Dmaven.test.skip -DskipTests'
                }
            }
        }
        stage('Sonar Scan') {
            steps {
                withSonarQubeEnv('sonarqube') {
                    withMaven(maven: 'mvn') {
                        script {
                            def allJob = env.JOB_NAME.tokenize('/') as String[]
                            def projectName = allJob[0]
                            sh "mvn sonar:sonar -Dsonar.branch.name=${env.GIT_BRANCH} -Dsonar.projectKey=${projectName} -Dsonar.projectName=${projectName} -Dmaven.test.skip -DskipTests"
                        }
                    }
                }
            }
        }
        stage('Sonar Gate') {
            steps {
                timeout(time: 30, unit: 'MINUTES') {
                    waitForQualityGate abortPipeline: true
                }
            }
        }
        stage('Docker Build') {
            steps {
                script {
                    def allJob = env.JOB_NAME.tokenize('/') as String[]
                    def projectName = allJob[0]
                    final noSuffixProjectName = projectName.substring(0, projectName.lastIndexOf('-'))
                    sh "echo ${DOCKER_CREDENTIALS_PSW} | docker login -u ${DOCKER_CREDENTIALS_USR} 192.168.2.157:8881 --password-stdin"
                    sh "docker buildx build --platform linux/amd64 -t 192.168.2.157:8881/uni/${noSuffixProjectName}:dev-${BUILD_NUMBER} -f ${env.JENKINS_HOME}/k8s-config/docker/BackendDockerfile . --push"
                }
            }
        }
        stage('Maven Deploy') {
            steps {
                withMaven(maven: 'mvn') {
                    sh 'mvn deploy -Dmaven.test.skip -DskipTests'
                }
            }
        }
        stage('K8s Apply') {
            steps {
                echo 'not support now, comming soon'
            }
        }
    }
    post {
        always {
            sh 'docker logout 192.168.2.157:8881'
        }
        cleanup {
            cleanWs()
        }
        success {
            echo 'Finished!'
        }
    }
}
When it reaches the "Docker Build" stage, Jenkins throws this error:
Warning: A secret was passed to "sh" using Groovy String interpolation, which is insecure.
Affected argument(s) used the following variable(s): [DOCKER_CREDENTIALS_PSW]
See https://jenkins.io/redirect/groovy-string-interpolation for details.
+ echo ****
+ docker login -u **** 192.168.2.157:8881 --password-stdin
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Pipeline] sh
+ docker buildx build --platform linux/amd64 -t 192.168.2.157:8881/uni/cqu:dev-11 -f /var/jenkins_home/k8s-config/docker/BackendDockerfile . --push
unknown flag: --platform
See 'docker --help'.
Why can't the Jenkins pipeline use the "--platform" option? How do I fix this problem?
Make sure your Jenkins agent has a recent version of Docker.
BuildKit has been integrated into docker build since Docker 18.06.
In my case version 18.09.6 did not work; 20.10 works fine.
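A quick way to check an agent is to compare its daemon version against the minimum. This is a sketch with hard-coded example strings; in practice the installed version would come from docker version --format '{{.Server.Version}}' on the agent:

```shell
required="18.09"
installed="20.10.7"  # example value; query the agent's Docker daemon in practice

# sort -V orders version strings numerically; if the installed version
# sorts last, it is at least the required version
newest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | tail -n 1)
if [ "$newest" = "$installed" ]; then
    echo "docker is new enough for buildx"
else
    echo "upgrade docker on this agent"
fi
```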
I've inherited this Jenkinsfile stage that will run a new docker image using withRun:
stage('Deploy') {
    steps {
        script {
            docker.image('deployscript:latest').withRun("""\
                -e 'IMAGE=${IMAGE_NAME}:${BUILD_ID}' \
                -e 'CNAME=${IMAGE_NAME}' \
                -e 'PORT=${PORT_1}:80' \
                -e 'PORT=${PORT_2}:443'""") { c ->
                sh "docker logs ${c.id}"
            }
        }
    }
}
However, I believe this method is only meant for testing purposes, and it actually stops the container once the block finishes. I want this step to actually run the container, stopping or restarting the previous one if necessary. The documentation on this is surprisingly sparse. Please help.
If you want to run the Docker container throughout all the stages, the example would look like this:
Scripted Pipeline
node('master') {
    /* Requires the Docker Pipeline plugin to be installed */
    docker.image('alpine:latest').inside {
        stage('01') {
            sh 'echo STAGE01'
        }
        stage('02') {
            sh 'echo STAGE02'
        }
    }
}
Declarative Pipeline
pipeline {
    agent {
        docker {
            image 'alpine:latest'
            label 'master'
            args '-v /tmp:/tmp'
        }
    }
    stages {
        stage('01') {
            steps {
                sh "echo STAGE01"
            }
        }
        stage('02') {
            steps {
                sh "echo STAGE02"
            }
        }
    }
}
In both the scripted and declarative pipelines, the container from the alpine image stays active until all the stages finish, and it is removed whether the build succeeds or fails.
But if you want to start, stop, and restart the container yourself in different stages, you can do it by wrapping the docker commands in sh steps, like below:
node {
    stage('init') {
        sh 'docker create --name myImage1 -v $(pwd):/var/jenkins -w /var/jenkins imageName:tag'
    }
    stage('build') {
        // use docker start/stop/exec here to control the container and run
        // scripts inside it; the same goes for the other stages
        // once all done, you can remove the container
        sh 'docker rm myImage1'
    }
}
The following will stop the existing container and run a new one with the new image:
stage('Deploy') {
    steps {
        sh "docker stop ${IMAGE_NAME} || true && docker rm ${IMAGE_NAME} || true"
        sh "docker run -d \
            --name ${IMAGE_NAME} \
            --publish ${PORT}:443 \
            ${IMAGE_NAME}:${BUILD_ID}"
    }
}
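The '|| true' on the stop/remove line matters on the very first deploy, when no container with that name exists yet: it forces a zero exit status so the sh step does not fail the build. The idiom in isolation:

```shell
# 'command || true' always reports success, even when command fails,
# so a missing container does not abort the pipeline
false || true
echo "exit status: $?"   # prints: exit status: 0
```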
I am running a jenkins docker image by doing this:
docker run \
--rm \
-u root \
-p 8080:8080 \
-v /home/ec2-user/jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$HOME":/home \
jenkins/jenkins:lts
I have my Jenkins server up, but when I try to run a Docker image build as below:
pipeline {
    environment {
        registry = "leexha/node_demo"
        registyCredential = 'dockerhub'
        dockerImage = ''
    }
    agent any
    tools {
        nodejs "node"
    }
    stages {
        stage('Git clone') {
            steps {
                git 'https://github.com/leeadh/node-jenkins-app-example.git'
            }
        }
        stage('Installing Node') {
            steps {
                sh 'npm install'
            }
        }
        stage('Conducting Unit test') {
            steps {
                sh 'npm test'
            }
        }
        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }
        stage('Pushing to Docker Hub') {
            steps {
                script {
                    docker.withRegistry('', registyCredential) {
                        dockerImage.push()
                    }
                }
            }
        }
    }
}
it keeps telling me that docker is not found.
I already enabled the Docker daemon to communicate via -v /var/run/docker.sock:/var/run/docker.sock,
so I'm pretty confused about what's going on.
Any help?
You need to install docker inside the Jenkins container: mounting the socket shares the host's daemon, but not the docker CLI binary itself. You also need to install and configure the Docker plugin on your Jenkins server.
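One common approach is to build a derived image with the client baked in. This is a minimal sketch, assuming the Debian-based official image (package names may differ for other bases); at runtime the daemon used is still the host's, reached through the mounted /var/run/docker.sock:

```dockerfile
FROM jenkins/jenkins:lts
USER root
# install the docker.io package so the 'docker' client binary exists inside
# the container; builds still run against the host daemon via the socket
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
USER jenkins
```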
I am trying to SSH into a remote host and then execute certain commands in the remote host's shell. The following is my pipeline code.
pipeline {
    agent any
    environment {
        // comment added
        APPLICATION = 'app'
        ENVIRONMENT = 'dev'
        MAINTAINER_NAME = 'jenkins'
        MAINTAINER_EMAIL = 'jenkins@email.com'
    }
    stages {
        stage('clone repository') {
            steps {
                // cloning repo
                checkout scm
            }
        }
        stage('Build Image') {
            steps {
                script {
                    sshagent(credentials : ['jenkins-pem']) {
                        sh "echo pwd"
                        sh 'ssh -t -t ubuntu@xx.xxx.xx.xx -o StrictHostKeyChecking=no'
                        sh "echo pwd"
                        sh 'sudo -i -u root'
                        sh 'cd /opt/docker/web'
                        sh 'echo pwd'
                    }
                }
            }
        }
    }
}
But upon running this job, it executes sh 'ssh -t -t ubuntu@xx.xxx.xx.xx -o StrictHostKeyChecking=no' successfully, then stops there and does not execute any further commands. I want the commands written after the ssh command to execute in the remote host's shell. Any help is appreciated.
I would try something like this:
sshagent(credentials : ['jenkins-pem']) {
    sh "echo pwd"
    sh 'ssh -t -t ubuntu@xx.xxx.xx.xx -o StrictHostKeyChecking=no "echo pwd && sudo -i -u root && cd /opt/docker/web && echo pwd"'
}
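For context on why the original pipeline stalls: each sh step starts a fresh shell, so no state carries over between steps, and the bare interactive ssh just holds its own step open. The per-step isolation can be demonstrated locally with plain /bin/sh:

```shell
# state set in one shell invocation is gone in the next...
sh -c 'cd /tmp'
sh -c 'pwd'              # a brand-new shell, still in the starting directory
# ...so commands that must share state have to be chained in one invocation,
# which is why the whole command string is passed to ssh in one go
sh -c 'cd /tmp && pwd'   # prints /tmp
```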
I resolved this issue with a heredoc:
script {
    sh """ssh -tt login@host << EOF
your command
exit
EOF"""
}
stage("DEPLOY CONTAINER") {
    steps {
        script {
            sh """
                #!/bin/bash
                sudo ssh -i /path/path/keyname.pem username@serverip << EOF
sudo bash /opt/filename.sh
exit 0
EOF
            """
        }
    }
}
There is a better way to run commands on a remote host over SSH. I know this is a late answer, but I just explored this, so I would like to share it; it will help others resolve this problem easily.
I found a link helpful on how to run multiple commands on a remote host over SSH. We can also run multiple commands conditionally, as mentioned in that blog.
Going through it, I found this syntax:
ssh username@hostname "command1; command2; commandN"
Now, how do you run commands on a remote host over SSH in a Jenkins pipeline?
Here is the solution:
pipeline {
    agent any
    environment {
        /*
        define your commands in a variable
        */
        remoteCommands =
            """java --version;
            java --version;
            java --version """
    }
    stages {
        stage('Login to remote host') {
            steps {
                sshagent(['ubnt-creds']) {
                    /*
                    Provide the variable as an argument to the ssh command
                    */
                    sh 'ssh -tt username@hostname $remoteCommands'
                }
            }
        }
    }
}
Firstly, and optionally, you can define a variable that holds all the commands separated by ; (semicolons) and then pass it as a parameter to the command.
Alternatively, you can pass your commands directly to the ssh command:
sh "ssh -tt username@hostname 'command1;command2;commandN'"
I have used it in my code and it's working great!
Happy Learning :)
This is the groovy script for a simple build pipeline that uses the docker image of SQL Server on Linux:
def PowerShell(psCmd) {
    bat "powershell.exe -NonInteractive -ExecutionPolicy Bypass -Command \"\$ErrorActionPreference='Stop';$psCmd;EXIT \$global:LastExitCode\""
}
node {
    stage('git checkout') {
        git 'file:///C:/Projects/SsdtDevOpsDemo'
    }
    stage('build dacpac') {
        bat "\"${tool name: 'Default', type: 'msbuild'}\" /p:Configuration=Release"
        stash includes: 'SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac', name: 'theDacpac'
    }
    stage('start container') {
        sh 'docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=P#ssword1" --name SQLLinuxLocal2 -d -i -p 15566:1433 microsoft/mssql-server-linux'
    }
    stage('deploy dacpac') {
        unstash 'theDacpac'
        bat "\"C:\\Program Files\\Microsoft SQL Server\\140\\DAC\\bin\\sqlpackage.exe\" /Action:Publish /SourceFile:\"SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac\" /TargetConnectionString:\"server=localhost,15566;database=SsdtDevOpsDemo;user id=sa;password=P#ssword1\""
    }
    stage('run tests') {
        PowerShell('Start-Sleep -s 5')
    }
    stage('cleanup') {
        sh 'docker stop SQLLinuxLocal2'
        sh 'docker rm SQLLinuxLocal2'
    }
}
I got to this point with some help from a question I posted a day or so ago. This was my attempt (with some help) at doing the same thing with the Docker plugin:
def PowerShell(psCmd) {
    bat "powershell.exe -NonInteractive -ExecutionPolicy Bypass -Command \"\$ErrorActionPreference='Stop';$psCmd;EXIT \$global:LastExitCode\""
}
node {
    stage('git checkout') {
        git 'file:///C:/Projects/SsdtDevOpsDemo'
    }
    stage('Build Dacpac from SQLProj') {
        bat "\"${tool name: 'Default', type: 'msbuild'}\" /p:Configuration=Release"
        stash includes: 'SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac', name: 'theDacpac'
    }
    stage('start container') {
        docker.image('-e "ACCEPT_EULA=Y" -e "SA_PASSWORD=P#ssword1" --name SQLLinuxLocal2 -d -i -p 15566:1433 microsoft/mssql-server-linux').withRun() {
            unstash 'theDacpac'
            bat "\"C:\\Program Files\\Microsoft SQL Server\\140\\DAC\\bin\\sqlpackage.exe\" /Action:Publish /SourceFile:\"SsdtDevOpsDemo\\bin\\Release\\SsdtDevOpsDemo.dacpac\" /TargetConnectionString:\"server=localhost,15566;database=SsdtDevOpsDemo;user id=sa;password=P#ssword1\""
        }
        sh 'docker run -d --name SQLLinuxLocal2 microsoft/mssql-server-linux'
    }
    stage('sleep') {
        PowerShell('Start-Sleep -s 30')
    }
    stage('cleanup') {
        sh 'docker stop SQLLinuxLocal2'
        sh 'docker rm SQLLinuxLocal2'
    }
}
The problem with this is that although it works, the docker run -d line spins up a different incarnation of the container. Could someone please point me in the right direction for getting the same result as the first pipeline, but using the Docker plugin?