Context: I am running a shell script on a remote machine through Jenkins, but while it runs I get a "Host key verification failed." error in the Jenkins log.
Code snippet:
#!/bin/sh
# Shell script for running the JMeter test from Jenkins
# Performance Engineering Team
triggerPerformanceTest(){
    echo "Starting the JMeter script"
    ssh -tt -i Test.ppk ubuntu@testserver << EOF
cd apache-jmeter-3.1/bin/
JVM_ARGS="-Xms512m -Xmx25000m" ./jmeter.sh -n -t /home/ubuntu/JMeter/Test.jmx
exit
EOF
    echo "Test successfully executed"
}
triggerPerformanceTest
I can run the same command from my local machine through a code editor (refer to the attached screenshot).
Could someone help me resolve this issue? Note: I do not have access to the Jenkins server, so I am not able to do anything there.
The key of your remote machine is not known to Jenkins: it is not present in the ~/.ssh/known_hosts file.
This is a security measure to prevent man-in-the-middle attacks.
Someone has to add it for you, or you will not be able to ssh to the remote machine.
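For example, if whoever administers the Jenkins machine has shell access to it, adding the key could look like the sketch below (assuming the build runs as the jenkins user and the remote host is testserver, as in the question; ideally the fetched key is verified against a fingerprint obtained out of band, otherwise the man-in-the-middle protection is lost):
# Run on the Jenkins machine as the user that executes the build (often "jenkins").
# ssh-keyscan fetches the remote host key and appends it to known_hosts, so later
# non-interactive ssh calls no longer fail with "Host key verification failed".
ssh-keyscan -H testserver >> ~/.ssh/known_hosts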
Related
I have a Jenkins project which pulls and containerises changes from the given repo and then uses an Ansible playbook to deploy to the host machine/s. There are over 10 different server groups in my /etc/ansible/hosts file, all of which can be pinged successfully using ansible -m ping all and SSH'd into from the Jenkins machine.
I spun up a new VM, added it to the hosts file and used ssh-copy-id to add the Jenkins machine's public key. I received a pong from my Ansible ping and successfully SSH'd into the machine. When I run the project I receive the following error:
TASK [Gathering Facts] *********************************************************
fatal: [my_machine]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Host key verification failed.", "unreachable": true}
The Jenkins project is virtually identical to my other projects, and the VM is set up the same way as my other ones.
In the end I had to add host_key_checking = False to my /etc/ansible/ansible.cfg file, but that is just a temporary fix.
Other answers online seem to suggest that the issue is with the SSH key, but I don't believe this is true in my case, as I can SSH into the machine. I would like to understand how to get rid of this error message and deploy without disabling host key checking.
The remote host is in ~/.ssh/known_hosts.
Any help would be appreciated.
SSH to a remote host verifies that host's key. If you ssh to a new machine, you are asked whether to add / trust the key. If you choose "Yes", the key is saved in ~/.ssh/known_hosts.
The message "Host key verification failed" implies that the key of the remote host is missing from, or has changed in, the known_hosts file on the machine that runs the Ansible script.
I normally resolve this problem by issuing an ssh to the remote host and adding the key to the ~/.ssh/known_hosts file.
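For example, on the machine that runs the Jenkins/Ansible job, a quick sketch (assuming the job runs as the jenkins user and the new VM is the my_machine host from the error above):
# Run the key scan as the same user Jenkins/Ansible uses, so the key is written
# to that user's known_hosts rather than your own.
sudo -u jenkins -H sh -c 'ssh-keyscan -H my_machine >> ~/.ssh/known_hosts'
Alternatively, a single manual ssh to the host as that user, answering "yes" to the host-key prompt, has the same effect.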
For me, it helped to disable the host SSH key check in the Jenkins job configuration.
I am writing a Jenkins pipeline which, in the end, will trigger execution of a Java process on a remote host. Currently this last stage looks like:
stage('end') {
    sh '''
        ssh jenkins@xxx.xxx.xxx.xxx java -jar /opt/stat/stat.jar
    '''
}
The process starts successfully on the remote machine, but the Jenkins job never ends. Is there any flag to tell it that the job must be completed?
It seems like your java command does not exit but keeps running, and that is probably the desired behavior. What about putting the process in the background on the remote machine:
stage('end') {
    sh '''
        ssh jenkins@xxx.xxx.xxx.xxx "java -jar /opt/stat/stat.jar &>/dev/null &"
    '''
}
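If backgrounding alone still leaves the job hanging (ssh can keep the session open while the remote process holds stdout/stderr), a variant of the same idea is to use nohup with full redirection. A sketch of just the ssh line, keeping the user and jar path from the question:
# nohup plus redirecting all output lets the remote shell exit immediately,
# so ssh returns and the Jenkins job can finish while the process keeps running
ssh jenkins@xxx.xxx.xxx.xxx "nohup java -jar /opt/stat/stat.jar >/dev/null 2>&1 &"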
On our project we want to deploy our .NET application to a remote machine. For that purpose we have chosen the PsExec tool. The problem is that commands that work fine in cmd don't work in Jenkins. They look like this in Jenkins:
bat '%windir%\\sysnative\\PsExec.exe \\\\ipaddress -u user -p password -accepteula -h cmd /c "command" /q"'
Jenkins prints that access is denied, although it works well in cmd. What should I do? Why does it work differently in Jenkins and cmd? Maybe I'm doing something wrong?
Your Jenkins service must be launched by an admin user. Then you'll have access to these commands.
I am running Jenkins on Ubuntu 14.04 (Trusty Tahr) with slave nodes connected via SSH. We're able to communicate with the nodes to run most commands, but when a command requires tty input, we get the classic
the input device is not a TTY
error. In our case, it's a docker exec -it command.
So I'm searching through loads of information about Jenkins, trying to figure out how to configure the connection to the slave node to enable the -t option to force tty instances, and I'm coming up empty. How do we make this happen?
As far as I know, you cannot give -t to the ssh that Jenkins fires up (which makes sense, as Jenkins is inherently detached). From the documentation:
When the SSH slaves plugin connects to a slave, it does not run an interactive shell. Instead it does the equivalent of your running "ssh slavehost command..." a few times...
However, you can defeat this in your build scripts by...
looping back to yourself: ssh -t localhost command
using a local PTY generator: script --return -c "command" /dev/null
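Applied to the docker exec -it case from the question, the second option might look like the sketch below (container_name and your_command are placeholders):
# script(1) allocates a pseudo-terminal for the command it runs, so docker exec
# no longer complains that the input device is not a TTY
script --return -c "docker exec -it container_name your_command" /dev/null
If no interactive terminal is actually needed, simply dropping the -t flag (docker exec -i ...) also avoids the error.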
I have Jenkins running as a Docker container. I tried to install the Jenkins Build and Publish plugin and copied a Dockerfile into the Jenkins workspace, but whenever I run the build, it gives me:
Started by user Jenkins Admin
Building in workspace /var/lib/jenkins/jobs/workspace
[workspace] $ docker build -t index.docker.io/test/openshift:latest --pull=true /var/lib/jenkins/jobs/test/workspace
ERROR: Cannot run program "docker" (in directory "/var/lib/jenkins/jobs/workspace"): error=2, No such file or directory
java.io.IOException: Cannot run program "docker" (in directory "/var/lib/jenkins/jobs/workspace"): error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at hudson.Proc$LocalProc.<init>(Proc.java:244)
at hudson.Proc$LocalProc.<init>(Proc.java:216)
at hudson.Launcher$LocalLauncher.launch(Launcher.java:803)
at hudson.Launcher$ProcStarter.start(Launcher.java:381)
Build step 'Docker Build and Publish' marked build as failure
Finished: FAILURE
Could you please tell me why that is?
Inside a Docker container you have no access to the docker binary by default (hence the error message No such file or directory).
If you want to use Docker within a Docker container, you need to either use DinD (Docker-in-Docker) or DooD (Docker-outside-of-Docker).
The first is a separate Docker installation within your Jenkins container; the second only mounts the host's Docker installation via volumes.
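For the DooD variant, a minimal sketch of starting the Jenkins container might look like this (the jenkins/jenkins:lts image, port mappings and paths are assumptions for illustration):
# DooD: mount the host's Docker socket (and the docker CLI) into the Jenkins
# container, so builds talk to the host's Docker daemon instead of needing
# a daemon of their own
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /usr/bin/docker:/usr/bin/docker \
  jenkins/jenkins:lts
Note that mounting the host's docker binary only works if it is statically linked; otherwise install a Docker CLI inside the Jenkins image instead.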
Further reading about DinD in general and in regards to Jenkins:
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
https://github.com/killercentury/docker-jenkins-dind
https://github.com/tehranian/dind-jenkins-slave
Further reading about DooD in general and in regards to Jenkins:
http://container-solutions.com/running-docker-in-jenkins-in-docker/
https://hub.docker.com/r/axltxl/jenkins-dood/
Update
The information on using the Workflow plugin below is no longer correct.
I have since written a plugin called docker-swarm-slave that offers a build wrapper you can configure for a job, which automatically provisions a Docker container for a build if you use my jenkins-dood-image or are running directly on bare metal.
The documentation is unfortunately rather sparse, but maybe it is useful to somebody.
I have a similar use-case: I want to be able to automatically start a Docker-container with a specified image running a Jenkins Swarm client that will take over the build.
My jenkins-dood-image contains a script, docker-slave, which lets me automatically provision a Docker Swarm slave and execute what I need on it using the Workflow plugin with a script like the following:
node('master') {
    stage 'Create docker-slave'
    withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: 'swarm-login', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD']]) {
        sh 'docker-slave --job-name $JOB_NAME --build-number $BUILD_NUMBER -i pitkley/python-swarm:3.4 -u $USERNAME -p $PASSWORD -- -labels "${JOB_NAME}_${BUILD_NUMBER}"'
    }
    stage 'Execute on docker-slave'
    node("${env.JOB_NAME}_${env.BUILD_NUMBER}") {
        sh 'hostname'
    }
    stage 'Remove docker-slave'
    sh 'docker-slave --job-name $JOB_NAME --build-number $BUILD_NUMBER --rm'
}
(This assumes you need credentials to authenticate, which are saved in Jenkins under the credentials ID swarm-login used in the script.)