I manage my pipeline with a Jenkinsfile, and I am trying to deploy my built app over an SSH connection.
stage("Deploy") {
steps {
echo "Deploy project..."
sh """ssh -tt -o StrictHostKeyChecking=no -i /var/jenkins_home/.ssh/id_rsa admin#${PROJECT_IP_ADDRESS} 'mkdir /srv/projects/${PROJECT_NAME} -p && chmod 777 /srv/projects/${PROJECT_NAME} '"""
dir("/srv/projects"){
//sh """ssh -tt -o StrictHostKeyChecking=no -i /var/jenkins_home/.ssh/id_rsa admin#${PROJECT_IP_ADDRESS} 'mkdir /srv/projects/${PROJECT_NAME} && chmod 777 /srv/projects/${PROJECT_NAME}'"""
sh "scp -r ${PROJECT_NAME} admin#${PROJECT_IP_ADDRESS}:/srv/projects"
}
}
}
The SSH command runs, but the commands inside the quotes are never executed:
+ ssh -tt -o StrictHostKeyChecking=no -i /var/jenkins_home/.ssh/id_rsa admin@<ip>
Warning: Permanently added '<ip>' (ECDSA) to the list of known hosts.
Linux <ip> #1 SMP Debian <version> (2021-03-19) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
The connection is established, but the remote commands never run and the SSH session never terminates. When I run the same command in a terminal as the Jenkins user, it succeeds.
Does anyone have an idea how to fix this issue?
Thanks
I'm writing a Jenkins pipeline (Jenkinsfile), and within the script clause I have to SSH to a box and run some commands. I think the problem has to do with the environment variables that I'm using within the quotes. I'm getting an "ENDSSH: command not found" error and I'm at a loss. Any help would be much appreciated.
stage("Checkout my-git-repo"){
steps {
script {
sh """
ssh -o StrictHostKeyChecking=accept-new -o LogLevel=ERROR -o UserKnownHostsFile=/dev/null -i ${JENKINS_KEY} ${JENKINS_KEY_USR}#${env.hostname} << ENDSSH
echo 'Removing current /opt/my-git-repo directory'
sudo rm -rf /opt/my-git-repo
echo 'Cloning new my-git-repo repo into /opt'
git clone ssh://${JENKINS_USR}#git.gitbox.com:30303/my-git-repo
sudo mv /home/jenkins/my-git-repo /opt
ENDSSH
"""
}
}
}
-bash: line 6: ENDSSH: command not found
I'm personally not familiar with Jenkins, but I'd guess the issue is the whitespace before ENDSSH. Whitespace in front of the here-document delimiter is not allowed (https://linuxize.com/post/bash-heredoc/).
Try either removing the indentation:
stage("Checkout my-git-repo"){
steps {
script {
sh """
ssh -o StrictHostKeyChecking=accept-new -o LogLevel=ERROR -o UserKnownHostsFile=/dev/null -i ${JENKINS_KEY} ${JENKINS_KEY_USR}#${env.hostname} << ENDSSH
echo 'Removing current /opt/my-git-repo directory'
sudo rm -rf /opt/my-git-repo
echo 'Cloning new my-git-repo repo into /opt'
git clone ssh://${JENKINS_USR}#git.gitbox.com:30303/my-git-repo
sudo mv /home/jenkins/my-git-repo /opt
ENDSSH
"""
}
}
}
OR ensure that the leading whitespace is only tabs and replace << with <<-:
Appending a minus sign to the redirection operator (<<-) causes all leading tab characters to be ignored. This allows you to use indentation when writing here-documents in shell scripts. Only leading tab characters are allowed, not spaces.
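For example, a minimal sketch of the same stage using <<- (a caveat: the indentation inside the sh block must be real tab characters; spaces will still break it):
stage("Checkout my-git-repo") {
    steps {
        script {
            sh """
				ssh -o StrictHostKeyChecking=accept-new -i ${JENKINS_KEY} ${JENKINS_KEY_USR}@${env.hostname} <<- ENDSSH
				echo 'Removing current /opt/my-git-repo directory'
				sudo rm -rf /opt/my-git-repo
				ENDSSH
            """
        }
    }
}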
My Jenkins has lost its connection to the Tomcat server. I have also added the private key in Jenkins credentials.
This is my Jenkinsfile for the 'Deploy-toTomcat' stage:
steps {
    sshagent(['tomcat']) {
        sh 'scp -o StrictHostKeyChecking=no target/*.war ubuntu@35.239.69.247:/home/nat/prod/apache-tomcat-9.0.41/webapps/webapp.war'
    }
}
}
This is the error when I am trying to build the pipeline in Jenkins:
+ scp -o StrictHostKeyChecking = no target/WebApp.war ubuntu@35.239.69.247:/home/nat/prod/apache-tomcat-9.0.41/webapps/webapp.war
command-line line 0: missing argument.
lost connection
script returned exit code 1
error
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 139377 killed;
[ssh-agent] Stopped.
I also ran chmod 777 on the webapps directory.
I am following this link https://www.youtube.com/watch?v=dSMSHGoHVJY&list=PLjNII-Jkdjfz5EXWlGMBRk63PC8uJsHMo&index=7 to deploy to Tomcat.
I hope anyone who knows can answer my question on how to deploy to Tomcat. The source code I am testing the pipeline with is from https://github.com/cehkunal/webapp.git. Thank you.
The error is because it did not recognize the authorized keys. What I've done:
delete all previous keys in the ~/.ssh directory,
ssh-keygen -t rsa
mv id_rsa.pub authorized_keys
chmod 0600 /home/username/.ssh/authorized_keys
chmod 0700 /home/username/.ssh
cat id_rsa
Finally, paste the output of cat id_rsa into Manage Credentials in Jenkins.
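Before pasting the key into Jenkins, it may help to confirm the key pair itself works; a quick check from the Jenkins host (hypothetical user and host, adjust to your setup):
ssh -i /home/username/.ssh/id_rsa username@your-tomcat-server 'echo key ok'
If this still prompts for a password, the authorized_keys contents or the permissions above are not right yet.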
I want to SSH into a server to perform some tasks in my Jenkins pipeline.
Here are the steps that I went through.
On my remote server, I used ssh-keygen to create id_rsa and id_rsa.pub.
I copied the contents of id_rsa and pasted them into the Private Key field in the Global Credentials menu on my Jenkins server.
In my Jenkinsfile, I do:
stage('SSH into the server') {
    steps {
        withCredentials([sshUserPrivateKey(
                credentialsId: '<ID>',
                keyFileVariable: 'KEY_FILE')]) {
            sh '''
                more ${KEY_FILE}
                cat ${KEY_FILE} > ./key_key.key
                eval $(ssh-agent -s)
                chmod 600 ./key_key.key
                ssh-add ./key_key.key
                cd ~/.ssh
                echo "ssh-rsa ... (the string from the server's id_rsa.pub)" >> authorized_keys
                ssh root@<server_name> docker ps
            '''
        }
    }
}
It pretty much creates an ssh-agent using the private key of the remote server and adds the public key to the authorized keys.
As a result, this gives me: Host key verification failed.
I just want to SSH into the remote server, but I keep facing this issue. Any help?
LOG
++ ssh-agent -s
+ eval 'SSH_AUTH_SOCK=/tmp/ssh-xfcQYEfiyfRs/agent.26353;' export 'SSH_AUTH_SOCK;' 'SSH_AGENT_PID=26354;' export 'SSH_AGENT_PID;' echo Agent pid '26354;'
++ SSH_AUTH_SOCK=/tmp/ssh-xfcQYEfiyfRs/agent.26353
++ export SSH_AUTH_SOCK
++ SSH_AGENT_PID=26354
++ export SSH_AGENT_PID
++ echo Agent pid 26354
Agent pid 26354
+ chmod 600 ./key_key.key
+ ssh-add ./key_key.key
Identity added: ./key_key.key (./key_key.key)
+ ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ./key_key.key root@<server> docker ps
Warning: Permanently added '<server>, <IP>' (ECDSA) to the list of known hosts.
WARNING!!!
READ THIS BEFORE ATTEMPTING TO LOGON
This System is for the use of authorized users only. ....
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
It is failing because StrictHostKeyChecking is enabled. Change your SSH command as below and it should work fine:
ssh -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" root@<server_name> docker ps
StrictHostKeyChecking=no disables the prompt for host key verification.
UserKnownHostsFile=/dev/null skips the host key check by writing the recorded key to /dev/null.
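Applied to the stage from the question, the sh block would look something like this sketch (everything else unchanged):
sh '''
    cat ${KEY_FILE} > ./key_key.key
    chmod 600 ./key_key.key
    eval $(ssh-agent -s)
    ssh-add ./key_key.key
    ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@<server_name> docker ps
'''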
I need to run a Docker container in Jenkins so that installed libraries like pycodestyle are available in the following steps.
I successfully built the Docker container (from a Dockerfile).
How do I get into the container so that I can use it in the next step? (Please look for the >> << markers in the Build stage below.)
Thanks
stage('Build') {
    // Install python libraries from requirements.txt (Check Dockerfile for more detail)
    sh "docker login -u '${DOCKER_USR}' -p '${DOCKER_PSW}' ${DOCKER_REGISTRY}"
    sh "docker build \
        --tag '${DOCKER_REGISTRY}/${DOCKER_TAG}:latest' \
        --build-arg HTTPS_PROXY=${PIP_PROXY} ."
    >> sh "docker run -ti ${DOCKER_REGISTRY}/${DOCKER_TAG}:latest sh" <<
}
}
stage('Linting') {
    sh '''
        awd=$(pwd)
        echo '===== Linting START ====='
        for file in $(find . -name '*.py'); do
            filename=$(basename $file)
            if [[ ${file:(-3)} == ".py" ]] && [[ $filename = *"test"* ]]; then
                echo "perform PEP8 lint (python pylint blah) for $filename"
                cd $awd && cd $(dirname "${file}") && pycodestyle "${filename}"
            fi
        done
        echo '===== Linting END ====='
    '''
}
You need to mount the workspace of your Jenkins job (containing your Python project) as a volume (see the docker run -v option) into your container, and then run the "next step" build step inside this container. You can do this by providing a shell script as part of your project's source code which performs the "next step", or by writing this script in a previous build stage.
It would be something like this:
sh "chmod +x build.sh"
sh "docker run -v $WORKSPACE:/workspace ${DOCKER_REGISTRY}/${DOCKER_TAG}:latest /workspace/build.sh"
build.sh is an executable script, part of your project's workspace, which performs the "next step".
$WORKSPACE is the folder used by your Jenkins job (normally /var/jenkins_home/jobs/<job name>/workspace); it is provided by Jenkins as a build variable.
Please note: this solution requires that the Docker daemon is running on the same host as Jenkins! Otherwise the workspace will not be available to your container.
Another solution would be to run Jenkins itself as a Docker container, so you can easily share the Jenkins home/workspaces with the containers you run within your build jobs, as described here:
Running Jenkins tests in Docker containers build from dockerfile in codebase
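A rough sketch of that setup, assuming the official jenkins/jenkins image (mounting the Docker socket lets builds start sibling containers on the same daemon, so the jenkins_home volume and its workspaces stay shareable):
docker run -d --name jenkins \
    -p 8080:8080 \
    -v jenkins_home:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    jenkins/jenkins:lts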
I'm trying to execute an SSH command from inside a Docker container in a Jenkins pipeline. I'm using the CloudBees Docker Pipeline Plugin to spin up the container and execute commands, and the SSH Agent Plugin to manage my SSH keys. Here's a basic version of my Jenkinsfile:
node {
    step([$class: 'WsCleanup'])
    docker.image('node').inside {
        stage('SSH') {
            sshagent (credentials: [ 'MY_KEY_UUID' ]) {
                sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a"
            }
        }
    }
}
When the SSH command runs, I get this error:
+ ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a
No user exists for uid 1005
I combed through the logs and realized the Docker Pipeline Plugin is automatically telling the container to run with the same user that is logged in on the host by passing a UID as a command line argument:
$ docker run -t -d -u 1005:1005 [...]
I decided to check what users existed in the host and the container by running cat /etc/passwd in each environment. Sure enough, the list of users was different in each. 1005 was the jenkins user on the host machine, but that UID didn't exist in the container. To solve the issue, I mounted /etc/passwd from the host to the container when spinning it up:
node {
    step([$class: 'WsCleanup'])
    docker.image('node').inside('-v /etc/passwd:/etc/passwd') {
        stage('SSH') {
            sshagent (credentials: [ 'MY_KEY_UUID' ]) {
                sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a"
            }
        }
    }
}
The solution provided by @nathan-thompson is awesome, but in my case I was unable to find the user even in the /etc/passwd of the host machine! Mounting the passwd file therefore did not fix the problem. This question https://superuser.com/questions/580148/users-not-found-in-etc-passwd suggested that some users are logged in to the host through an identity provider like LDAP.
The solution was finding a way to add the proper line to the passwd file on the container. Calling getent passwd $USER on the host will provide the passwd line for the Jenkins user running the container.
I added a step running on the node (and not the docker agent) to get the line and save it in a file. Then in the next step I mounted the generated passwd to the container:
stages {
    stage('Create passwd') {
        steps {
            sh """echo \$(getent passwd \$USER) > /tmp/tmp_passwd
            """
        }
    }
    stage('Test') {
        agent {
            docker {
                image '*******'
                args '***** -v /tmp/tmp_passwd:/etc/passwd'
                reuseNode true
                registryUrl '*****'
                registryCredentialsId '*****'
            }
        }
        steps {
            sh """ssh -i ********
            """
        }
    }
}
I just found another solution to this problem that I want to share. It differs from the existing solutions in that it allows the complete pipeline to run in one agent, instead of per stage.
The trick is, instead of directly using an image, to refer to a Dockerfile (which may be built FROM the original) and then add the user:
# Dockerfile
FROM node

ARG jenkinsUserId=
RUN if ! id ${jenkinsUserId}; then \
        usermod -u ${jenkinsUserId} jenkins; \
        groupmod -g ${jenkinsUserId} jenkins; \
    fi
// Jenkinsfile
pipeline {
    agent {
        dockerfile {
            additionalBuildArgs "--build-arg jenkinsUserId=\$(id -u jenkins)"
        }
    }
}
agent {
    docker {
        image 'node:14.10.1-buster-slim'
        args '-u root:root'
    }
}
environment {
    SSH_deploy = credentials('e99988ea-6bdc-45fc-b9e1-536b875bcac7')
}
stage('build') {
    steps {
        sh '''#!/bin/bash
            eval $(ssh-agent -s)
            cat $SSH_deploy | tr -d '\r' | ssh-add -
            touch .env
            echo 'REACT_APP_BASE_API = "//172.22.132.115:8080"' >> .env
            echo 'REACT_APP_ADMIN_PANEL_URL = "//172.22.132.115"' >> .env
            yarn install
            CI=false npm run build
            ssh -t -o StrictHostKeyChecking=no root@172.22.132.115 'rm -rf /usr/local/src/build'
            scp -r -o StrictHostKeyChecking=no build root@172.22.132.115:/usr/local/src/
            ssh -t -o StrictHostKeyChecking=no root@172.22.132.115 'systemctl restart nginx'
        '''
    }
}
Following the solution provided by Nathan Thompson, I modified it this way for a Jenkins Docker build container which runs inside a Jenkins Docker slave (Docker in Docker):
if (validated_parameters.custom_gradle_image) {
    docker.image(validated_parameters.custom_gradle_image).inside(" -v /etc/passwd:/etc/passwd -v /var/lib/jenkins/.ssh/:/var/lib/jenkins/.ssh/ ") {
        sshagent(['jenkins-git-io']) {
            sh "${gradleCommand}"
        }
    }
}