Do not see `/var/jenkins_home/workspace` directory - docker

I have started a jenkins container like so:
docker run -u $(id -u) -dit -p 49001:8080 -v "$(pwd)/jenkins_home":/var/jenkins_home --name jenkins jenkins/jenkins:latest
This starts a container and I am able to run my pipelines. I created a very simple pipeline:
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                input message: 'Do you want to proceed', ok: 'Yes'
            }
        }
    }
}
When I run the pipeline I see the following console output for this pipeline:
Running on Jenkins in /var/jenkins_home/workspace/first_pipeline
...
Do you want to proceed
Yes or Abort
I let the pipeline hang there without supplying the input and went into the Docker container to check, but I do not see any workspace subdirectory inside /var/jenkins_home.
Any ideas why this is happening?

It turned out the issue was exactly what @Ian W mentioned in the comment above: the workspace directory is only created once a step actually writes to it. If I update the pipeline to create files, I can see the workspace being created. Here is the updated pipeline:
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                writeFile(file: "test.txt", text: "Main workspace")
                dir(pwd(tmp: true)) {
                    writeFile(file: "test.txt", text: "Temporary workspace")
                }
                input message: 'Do you want to proceed', ok: 'Yes'
                echo "Test completed"
            }
        }
    }
}

Related

Stuck with Jenkins not building my docker images

I'm running Jenkins as a container and for some reason I'm having issues :D.
After the pipeline runs docker build -t testwebapp:latest . I get docker: Exec format error on the Build image stage.
The pipeline command docker.build seems to do what it should, so something must be wrong with my environment?
The Jenkins docker-compose includes docker.sock, so the running Jenkins should be allowed to piggyback off the host Docker?
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Pipeline script defined in Jenkins:
pipeline {
    agent any
    stages {
        stage('Initialize Docker') {
            steps {
                script {
                    def dockerHome = tool 'myDocker'
                    env.PATH = "${dockerHome}/bin:${env.PATH}"
                }
            }
        }
        stage('Checkout') {
            steps {
                git branch: 'main', url: 'github url'
            }
        }
        stage('Build image') {
            steps {
                script {
                    docker.build("testwebapp:latest")
                }
            }
        }
    }
    post {
        failure {
            script {
                currentBuild.result = 'FAILURE'
            }
        }
    }
}
The global tool configuration is pretty standard:
[Screenshot: Jenkins global tool configuration]

How to resolve ssh: not found in Jenkins Pipeline?

I got stuck in Jenkins Pipeline with ssh command. The error is:
+ ssh
/var/lib/jenkins/workspace/test-docker-jenkins@tmp/durable-2c3c7fb4/script.sh: line 1: ssh: not found
script returned exit code 127
My Jenkins File is:
pipeline {
    agent {
        docker {
            image 'node:15.12.0-alpine'
        }
    }
    stages {
        stage("Prepare") {
            steps {
                sh "yarn"
            }
        }
        stage("Build") {
            steps {
                sh "yarn build"
            }
        }
        stage("Deploy") {
            steps {
                sh "ssh"
            }
        }
    }
}
Does anyone know how to resolve this problem? Or is there any way to ssh to a remote server in a Jenkins Pipeline? Thanks in advance. Have a good day!
You are trying to ssh from a Docker container based on the node:15.12.0-alpine image, and that image doesn't contain an ssh client. From Jenkins you can of course use SSH; see the SSH Steps plugin of Jenkins and the relevant documentation: https://www.jenkins.io/doc/pipeline/steps/ssh-steps/
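As an alternative sketch (not from the answer above; it assumes the container can run as root so that packages can be installed, which the args '-u root' below provides), you could install the OpenSSH client inside the Alpine container before using it:

```groovy
pipeline {
    agent {
        docker {
            image 'node:15.12.0-alpine'
            args '-u root'   // assumption: root is needed so apk can install packages
        }
    }
    stages {
        stage("Deploy") {
            steps {
                // install the ssh client that the Alpine image is missing
                sh 'apk add --no-cache openssh-client'
                sh 'ssh -V'  // verify the client is now on the PATH
            }
        }
    }
}
```

Installing packages at build time works for a quick test, but baking openssh-client into a custom image avoids repeating the download on every run.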

Facing SSH connection issue during jenkins pipeline

I have 2 servers on AWS EC2. I want to deploy our Node.js application to both instances.
My code below works fine if both instances are available.
node(label: 'test') {
    def sshConn = 'ssh -i /home/ec2-user/pem/ourpemfile.pem ec2-user@IP for server1'
    def sshConn1 = 'ssh -i /home/ec2-user/pem/ourpemfile.pem ec2-user@IP for server2'
    stage('Checkout from Github') {
        checkout([
            $class: 'GitSCM',
            *
            *
        ])
    }
    stage('Build for Node1') {
        echo "Starting to Build..."
        sh "$sshConn pm2 stop application || true"
    }
    stage('Deploy to Node1') {
        echo "Starting Deployment..."
    }
    stage('Build for Node2') {
        echo "Starting to Build..."
        sh "$sshConn1 pm2 stop application || true"
    }
    stage('Deploy to Node2') {
        echo "Starting Deployment..."
    }
}
But my use case is: if one of the servers is stopped, the build job must still succeed and the application should be deployed on the available instance.
Currently, I am facing a timeout error if we stop server1 and run the Jenkins job.
It depends on your setup.
1) You can connect your nodes to Jenkins as agents via the ssh-slaves plugin, and then run commands on your servers with:
node('node_label') {
    sh 'any command here'
}
2) You can use the ssh-agent plugin and put your private key into Jenkins credentials.
3) Use retry:
retry(3) {
    // your code
}
You can check EC2 instance states via AWS CLI commands and, depending on their state, perform the deployment or not.
If you want to give it a shot, you'll have to declare your AWS credentials in Jenkins using the 'CloudBees AWS Credentials' plugin, and add something like this to your pipeline:
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                  accessKeyVariable: 'aV',
                  secretKeyVariable: 'sV',
                  credentialsId: 'id_of_your_credentials']]) {
    sh '''
        AWS_ACCESS_KEY_ID=${aV} \
        AWS_SECRET_ACCESS_KEY=${sV} \
        AWS_DEFAULT_REGION=us-east-1 \
        aws ec2 describe-instances --filters Name=instance-state-name,Values=running --query "Reservations[*].Instances[?Tags[?Key == 'Name' && contains(Value, 'server1')]].[Tags[3].Value,NetworkInterfaces[0].PrivateIpAddress,InstanceId,State.Name]" --output text
    '''
}
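Building on that, here is a minimal sketch of gating the deployment on the query result (the tag filter and the structure of the output are my assumptions; adapt the query to how you actually tag your servers):

```groovy
script {
    // assumption: the query prints one line per running instance tagged 'server1',
    // so an empty result means the server is down
    def out = sh(
        script: "aws ec2 describe-instances " +
                "--filters Name=instance-state-name,Values=running Name=tag:Name,Values=server1 " +
                "--query 'Reservations[].Instances[].InstanceId' --output text",
        returnStdout: true
    ).trim()
    if (out) {
        echo "server1 is running (${out}), deploying..."
        // deploy steps for server1 go here
    } else {
        echo "server1 is down, skipping its deployment"
    }
}
```

Because the stage only skips rather than fails when the instance is missing, the overall build stays successful, which matches the use case above.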
Regarding the AWS CLI command: I don't know how you manage your servers; I've assumed that you use a 'Name' tag to identify them.
Also, I think you should consider max's suggestion and use the SSH plugin for managing the configuration, credentials, etc.
Another option is ssh-agent. You have to store your private keys in the Credentials plugin (it is also possible to configure AWS secrets for that), and then use them in your pipeline:
https://www.jenkins.io/doc/pipeline/steps/ssh-agent/
node {
    sshagent(credentials: ['deploy-dev']) {
        sh 'ssh -o StrictHostKeyChecking=no -l cloudbees 192.168.1.106 uname -a'
    }
}

Single SSH connection in Jenkins pipeline

I've created my Jenkinsfile for building my project in production and the pipeline looks like this:
pipeline {
    agent any
    stages {
        stage('Pull') {
            steps {
                sh '''ssh ${SSH_USER}@${SERVER_ADDRESS} <<EOF
cd ${SOURCE_FOLDER}/project
git pull
git status
EOF'''
            }
        }
        stage('Composer') {
            parallel {
                stage('Composer') {
                    steps {
                        sh '''ssh ${SSH_USER}@${SERVER_ADDRESS} <<EOF
docker run --rm -v ${SOURCE_FOLDER}/project:/app composer/composer:latest install
EOF'''
                    }
                }
                stage('Composer 2') {
                    steps {
                        sh '''ssh ${SSH_USER}@${SERVER_ADDRESS} <<EOF
docker run --rm -v ${SOURCE_FOLDER}/project/sub:/app composer/composer:latest install
EOF'''
                    }
                }
            }
        }
    }
}
Is there a way to have all the stages all in one single SSH connection in order to minimise the overhead and the connection number?
I've done all the SSH setup manually by creating the keys and pasting the public key on the production machine.
You can create a function for the connection and pass the SSH_USER & SERVER_ADDRESS as input parameters to that function. Call this function from all your stages.
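As a sketch of that suggestion (the helper name remoteRun and the heredoc style are my own, not from the answer), the repeated ssh invocation could be factored out like this:

```groovy
// Hypothetical helper; assumes SSH_USER and SERVER_ADDRESS are set in the environment.
def remoteRun(String user, String host, String commands) {
    sh """ssh ${user}@${host} <<EOF
${commands}
EOF"""
}

pipeline {
    agent any
    stages {
        stage('Pull') {
            steps {
                script {
                    remoteRun(env.SSH_USER, env.SERVER_ADDRESS,
                              'cd $SOURCE_FOLDER/project && git pull && git status')
                }
            }
        }
    }
}
```

Note that this removes the duplication but still opens one SSH connection per call; for true connection reuse the stages would have to share a single remote session.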

Cannot specify flags when using variables for Docker agent args?

I am attempting to mount a volume for my Docker agent with Jenkins pipeline. The following is my JenkinsFile:
pipeline {
    agent none
    environment {
        DOCKER_ARGS = '-v /tmp/my-cache:/home/my-cache'
    }
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'my-image:latest'
                    args '$DOCKER_ARGS'
                }
            }
            steps {
                sh 'ls -la /home'
            }
        }
    }
}
Sadly it fails to run, and the following can be seen in the pipeline.log file:
java.io.IOException: Failed to run image 'my-image:latest'. Error: docker: Error response from daemon: create /tmp/my-cache: " /tmp/my-cache" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
See 'docker run --help'.
However, the following JenkinsFile does work:
pipeline {
    agent none
    environment {
        DOCKER_ARGS = '/tmp/my-cache:/home/my-cache'
    }
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'my-image:latest'
                    args '-v $DOCKER_ARGS'
                }
            }
            steps {
                sh 'ls -la /home'
            }
        }
    }
}
The only difference is that the -v flag is hardcoded outside of the environment variable.
I am new to Jenkins, so I have struggled to find any documentation on this behaviour. Could somebody please explain why I can't define my Docker agent args entirely in an environment variable?