I'm running Jenkins as a container and for some reason I'm having issues.
After the pipeline runs docker build -t testwebapp:latest . I get docker: Exec format error in the Build image stage.
The pipeline's docker.build command seems to do what it should, so is something wrong with my environment?
The Jenkins docker-compose file includes docker.sock, so the running Jenkins should be allowed to piggyback off the host's Docker:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Pipeline script defined in Jenkins:
pipeline {
agent any
stages {
stage('Initialize Docker') {
steps {
script {
def dockerHome = tool 'myDocker'
env.PATH = "${dockerHome}/bin:${env.PATH}"
}
}
}
stage('Checkout') {
steps {
git branch: 'main', url: 'github url'
}
}
stage('Build image') {
steps {
script {
docker.build("testwebapp:latest")
}
}
}
}
post {
failure {
script {
currentBuild.result = 'FAILURE'
}
}
}
}
The global tool configuration is pretty standard:
[screenshot: Jenkins global tool config]
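For what it's worth, "Exec format error" almost always means a CPU-architecture mismatch: the docker client that the 'myDocker' tool step put on PATH was built for a different architecture than the machine the Jenkins container runs on (a common trap on Apple Silicon or ARM hosts). A quick check, run inside the Jenkins container:

```shell
# "Exec format error" typically means the binary's target architecture
# doesn't match the machine's CPU (e.g. an x86_64 docker client on arm64).
uname -m    # architecture of the Jenkins container, e.g. x86_64 or aarch64
# Then compare against the binary the tool step installed, for example:
#   file "$(which docker)"    # should report the same architecture
```

If the two disagree, pointing the tool installer at the correct architecture's download (or using the host's docker client via a bind mount) is the usual fix.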
I want to connect to my server, clone the project from my repo onto that server, and do a docker build there. Could you show me a template? With my own code it only does the build on Jenkins; it doesn't connect to the server to do the pull.
pipeline {
agent any
stages {
stage('Pulling our project') {
steps{
withCredentials([gitUsernamePassword(credentialsId: 'GitlabCred')]) {
sh 'git pull origin jks'
}
}
}
stage('Building our project') {
agent any
steps {
sh 'docker compose up -d --build'
}
}
}
}
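One common pattern is to wrap the remote commands in an SSH session rather than running them on the Jenkins node. A sketch, assuming the "SSH Agent" plugin is installed; the credential ID 'server-ssh-key', the user@host value, and the /srv/project path are placeholders:

```groovy
pipeline {
    agent any
    environment {
        SERVER = 'deploy@your.server.example'   // hypothetical user@host
    }
    stages {
        stage('Pull and build on the server') {
            steps {
                // sshagent comes from the "SSH Agent" plugin;
                // 'server-ssh-key' is a hypothetical private-key credential ID.
                sshagent(credentials: ['server-ssh-key']) {
                    sh '''
                        ssh -o StrictHostKeyChecking=no "$SERVER" '
                            cd /srv/project &&
                            git pull origin jks &&
                            docker compose up -d --build
                        '
                    '''
                }
            }
        }
    }
}
```

The key point is that git pull and docker compose run inside the ssh invocation, so they execute on the target server instead of on the Jenkins workspace.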
I need to build a Node.js application which will connect to a MySQL DB and retrieve the string ‘hello_world’ from the database. The infrastructure is built on a Minikube cluster running on macOS. Docker Desktop is also installed.
There are 2 environments, dev and prod (each environment has its own namespace). There is already a MySQL DB pod set up in the dev namespace.
I have this working pipeline but wish to incorporate a testing stage. As of now, it just builds the Node.js application inside a Docker image which is pushed to DockerHub. Then a stage creates a temporary Kubernetes pod to pull and install the image (as a pod) in the dev namespace via Helm.
I see 2 places where I could insert this test stage:
1. In the Docker environment, after the Docker image is built and started. That means there has to be a MySQL DB already set up inside the Docker environment, or the Node.js application will fail.
2. After the image is deployed as a pod inside Minikube; the testing stage can then run in the dev namespace.
The testing stage should check that a specific string is returned when the web page is loaded. If not, the pipeline should fail and stop executing.
My question is, based on real-world use cases, at which point should I start the testing stage, and how do I achieve that while meeting the above requirements?
Thanks.
pipeline {
agent { label 'slave' }
environment {
DOCKERHUB_CREDENTIALS = credentials('DOCKERHUB_LOGIN')
}
parameters {
gitParameter branchFilter: 'origin/(.*)', defaultValue: 'main', name: 'BRANCH', type: 'PT_BRANCH'
}
tools {
nodejs "nodejs"
}
stages {
stage('Clone Code Repository') {
steps {
git branch: "${params.BRANCH}", url: "${params.GITHUB_REPO}"
}
}
stage('Download docker binary') {
steps{
script {
def dockerHome = tool 'docker'
env.PATH = "${dockerHome}/bin:${env.PATH}"
}
}
}
stage('Install NPM application') {
steps {
sh 'npm install'
}
}
stage('Docker Build and Tag') {
steps {
script {
docker.withServer("${params.DOCKER_URL}", 'DOCKERHOST_LOGIN'){
sh "docker build -t ${params.DOCKER_IMAGE} ."
}
}
}
}
stage('DockerHub Login') {
steps {
script {
docker.withServer("${params.DOCKER_URL}", 'DOCKERHOST_LOGIN'){
sh 'echo $DOCKERHUB_CREDENTIALS_PSW | docker login -u $DOCKERHUB_CREDENTIALS_USR --password-stdin'
}
}
}
}
stage('Push Image to DockerHub') {
steps {
script {
docker.withServer("${params.DOCKER_URL}", 'DOCKERHOST_LOGIN'){
sh "docker push ${params.DOCKER_IMAGE}"
}
}
}
}
stage("Deploy to Dev") {
agent {
kubernetes {
inheritFrom "helm-dev"
cloud "kubernetes-dev"
}
}
steps {
container("helm-dev") {
git branch: "${params.BRANCH}", url: "${params.GITHUB_REPO}"
sh "cd Development; helm upgrade -i dev-webapps ./dev-webapps"
}
}
}
}
post {
always {
script {
docker.withServer("${params.DOCKER_URL}", 'DOCKERHOST_LOGIN'){
sh 'docker logout'
}
}
}
}
}
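Testing against the deployed pod in dev (option 2) is the more common choice here, since it exercises the same service wiring and DB connection that prod will use. A sketch of a smoke-test stage that could follow 'Deploy to Dev'; the service name dev-webapps, the dev namespace, the expected string, and the availability of kubectl in the helm-dev container are all assumptions:

```groovy
stage("Smoke Test (Dev)") {
    agent {
        kubernetes {
            inheritFrom "helm-dev"
            cloud "kubernetes-dev"
        }
    }
    steps {
        container("helm-dev") {
            // Wait for the rollout, then fetch the page from inside the
            // cluster and fail the stage if the expected string is missing
            // (grep -q exits non-zero, which fails the sh step).
            sh '''
                kubectl -n dev rollout status deployment/dev-webapps --timeout=120s
                kubectl -n dev run smoke-test --rm -i --restart=Never \
                    --image=curlimages/curl -- \
                    -s http://dev-webapps.dev.svc.cluster.local | grep -q hello_world
            '''
        }
    }
}
```

Because a failing grep fails the stage, the pipeline stops before anything is promoted further, which is the behaviour the requirements describe.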
I got stuck in a Jenkins Pipeline with an ssh command. The error is:
+ ssh
/var/lib/jenkins/workspace/test-docker-jenkins#tmp/durable-2c3c7fb4/script.sh: line 1: ssh: not found
script returned exit code 127
My Jenkins File is:
pipeline {
agent {
docker {
image 'node:15.12.0-alpine'
}
}
stages {
stage("Prepare") {
steps {
sh "yarn"
}
}
stage("Build") {
steps {
sh "yarn build"
}
}
stage("Deploy") {
steps {
sh "ssh"
}
}
}
}
Does anyone know how to resolve this problem? Or is there any way to ssh to a remote server in a Jenkins Pipeline? Thanks in advance. Have a good day!
You are trying to ssh from a Docker container of image node:15.12.0-alpine, and that image doesn't contain ssh. From Jenkins you can of course do SSH; see the SSH Steps plugin of Jenkins and its documentation: https://www.jenkins.io/doc/pipeline/steps/ssh-steps/
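Alternatively, you can install the OpenSSH client inside the container before using it. A minimal sketch, assuming the pipeline user may install packages in the container (if not, add args '-u root' to the docker agent):

```groovy
stage("Deploy") {
    steps {
        // node:15.12.0-alpine ships without an ssh binary, so install it first.
        sh "apk add --no-cache openssh-client"
        // Placeholder: replace with the real deploy command. Key-based auth
        // still has to be provided, e.g. via the ssh-agent plugin.
        sh "ssh -V"
    }
}
```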
I've created my Jenkinsfile for building my project in production and the pipeline looks like this:
pipeline {
agent any
stages {
stage('Pull') {
steps {
sh '''ssh ${SSH_USER}@${SERVER_ADDRESS} <<EOF
cd ${SOURCE_FOLDER}/project
git pull
git status
EOF'''
}
}
stage('Composer') {
parallel {
stage('Composer') {
steps {
sh '''ssh ${SSH_USER}@${SERVER_ADDRESS} <<EOF
docker run --rm -v ${SOURCE_FOLDER}/project:/app composer/composer:latest install
EOF'''
}
}
stage('Composer 2') {
steps {
sh '''ssh ${SSH_USER}@${SERVER_ADDRESS} <<EOF
docker run --rm -v ${SOURCE_FOLDER}/project/sub:/app composer/composer:latest install
EOF'''
}
}
}
}
}
}
Is there a way to have all the stages in one single SSH connection, in order to minimise the overhead and the number of connections?
I've done all the SSH setup manually by creating the keys and pasting the public key on the production machine.
You can create a function for the connection and pass SSH_USER and SERVER_ADDRESS as input parameters to that function, then call it from all your stages.
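A sketch of that approach; the helper name runOnServer is hypothetical, key-based auth is assumed to be set up already, and the unquoted EOF heredoc keeps the original behaviour of expanding ${SOURCE_FOLDER} etc. from the Jenkins environment before the commands are sent:

```groovy
// Hypothetical helper: run a batch of commands over one SSH connection.
// \${SSH_USER} and \${SERVER_ADDRESS} are left for the shell to expand
// from the Jenkins environment, exactly as in the original stages.
def runOnServer(String commands) {
    sh """ssh \${SSH_USER}@\${SERVER_ADDRESS} <<EOF
${commands}
EOF"""
}

pipeline {
    agent any
    stages {
        stage('Pull and build') {
            steps {
                script {
                    // One connection for what used to be three separate stages.
                    runOnServer('''
cd ${SOURCE_FOLDER}/project
git pull
git status
docker run --rm -v ${SOURCE_FOLDER}/project:/app composer/composer:latest install
docker run --rm -v ${SOURCE_FOLDER}/project/sub:/app composer/composer:latest install
''')
                }
            }
        }
    }
}
```

The trade-off is that the two composer runs no longer execute in parallel; if that matters, keep them in one stage but background them on the remote side.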
I am attempting to mount a volume for my Docker agent with Jenkins pipeline. The following is my JenkinsFile:
pipeline {
agent none
environment {
DOCKER_ARGS = '-v /tmp/my-cache:/home/my-cache'
}
stages {
stage('Build') {
agent {
docker {
image 'my-image:latest'
args '$DOCKER_ARGS'
}
}
steps {
sh 'ls -la /home'
}
}
}
}
Sadly it fails to run, and the following can be seen in the pipeline.log file:
java.io.IOException: Failed to run image 'my-image:latest'. Error: docker: Error response from daemon: create /tmp/my-cache: " /tmp/my-cache" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
See 'docker run --help'.
However, the following JenkinsFile does work:
pipeline {
agent none
environment {
DOCKER_ARGS = '/tmp/my-cache:/home/my-cache'
}
stages {
stage('Build') {
agent {
docker {
image 'my-image:latest'
args '-v $DOCKER_ARGS'
}
}
steps {
sh 'ls -la /home'
}
}
}
}
The only difference is that the -v flag is hardcoded outside of the environment variable.
I am new to Jenkins, so I have struggled to find any documentation on this behaviour. Could somebody please explain why I can't define my Docker agent args entirely in an environment variable?
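I can't speak for the plugin's internals, but the leading space in the error (" /tmp/my-cache") hints that the args string is split into tokens before $DOCKER_ARGS is expanded: the expanded "-v /tmp/…" then reaches docker as a single argument, and docker reads everything after -v, space included, as the volume spec. The same early-vs-late word-splitting effect can be reproduced in plain shell:

```shell
# Analogy in plain shell (not the plugin itself): whether the flag and its
# value end up as one token or two depends on when word splitting happens.
DOCKER_ARGS='-v /tmp/my-cache:/home/my-cache'

printf '[%s]\n' "$DOCKER_ARGS"   # quoted: one token, flag and value fused
printf '[%s]\n' $DOCKER_ARGS     # unquoted: two tokens, as docker expects
```

That is consistent with the workaround you found: keeping -v literal in args means the plugin's own tokenizer sees the flag, and only the path pair comes from the variable.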