Jenkins master to slave error: Host key verification failed

I'm setting up an automation test on Jenkins. I'm trying to run a script remotely from one Linux machine (the master, the same machine as my Jenkins server) that calls a bunch of other scripts on another Linux machine (the slave). However, I'm getting this error on my first ssh command:
Host key verification failed.
I'm pretty sure there is no problem with the passwordless connection from master to slave, because I've run other tests previously using the same master/slave machines.
I ran the exact same command manually on my master and it returned the expected result. I don't know why it doesn't work in the automation test.
All I want this command to do is check whether a package is already installed (the OS is CentOS 7 on both machines):
ssh "${USERNAME}@${IP_ADDR}" "rpm -qa | grep ${MY_PACKAGE}"
I'm just checking the existence of the package before I proceed to more commands specific to this package.

ssh_opts='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
ssh $ssh_opts "${USERNAME}@${IP_ADDR}" "rpm -qa | grep ${MY_PACKAGE}"
Try this in your shell script. It disables host key verification for all hosts (note that this also disables protection against man-in-the-middle attacks, so use it only in trusted test environments).

Eventually I figured out what was wrong.
When I exchanged ssh host keys between the two machines, I didn't use the root user. However, when the test ran through Jenkins, it used sudo to run the target script on the slave test machine, which means ssh was reading the root user's known_hosts file, not the one I had configured for the test user (a non-root account)!
I merged the known_hosts files of the test user and the root user, and the problem was fixed: now the Jenkins master can reach my slave test machine as either root or the test user.
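The merge amounts to making sure root's known_hosts also contains the slave's entry. A minimal sketch, using throwaway files in place of the real /home/&lt;testuser&gt;/.ssh/known_hosts and /root/.ssh/known_hosts (the key lines are dummies):

```shell
#!/bin/sh
# Stand-ins for the two known_hosts files; on the real master these
# would be the test user's and root's ~/.ssh/known_hosts.
user_kh=$(mktemp)
root_kh=$(mktemp)
printf 'slave-host ssh-ed25519 AAAAC3user\n' > "$user_kh"
printf 'other-host ssh-ed25519 AAAAC3root\n' > "$root_kh"

# Append the test user's entries to root's file, then de-duplicate.
cat "$user_kh" >> "$root_kh"
sort -u -o "$root_kh" "$root_kh"
cat "$root_kh"   # now contains both hosts' entries
```

Alternatively, running `ssh-keyscan <slave-ip> >> /root/.ssh/known_hosts` as root adds the slave's current host key directly.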

Related

How to identify commands run by Ansible on a remote host in a Falco context?

I would like to know if someone has an idea of how to identify commands run by Ansible on a remote host.
To give you more context, I'll describe my workflow in depth:
I have a job scheduled between 1 am and 6 am which runs a compliance Ansible playbook to ensure the production servers' configuration is up to date and correct; this playbook changes some files inside the /etc folder.
Besides this, I have a Falco stack which keeps an eye on what is going on on the production servers and raises alerts when an event I describe as suspicious is found (a syscall, a network connection, sensitive file editing such as /etc/passwd or pam.conf, etc.).
The problem I'm running into is that my playbook triggers some alerts, for example:
Warning Sensitive file opened for reading by non-trusted program (user=XXXX user_loginuid=XXX program=python3 command=python3 file=/etc/shadow parent=sh gparent=sudo ggparent=sh gggparent=sshd container_id=host image=<NA>)
My question is: can we set a "flag" or prefix on all Ansible commands, which would allow me to whitelist that flag or prefix and avoid triggering my alerts for nothing?
PS: whitelisting python3 for the root user is not a solution in my opinion.
Ansible is a Python tool, so the process accessing the file will be python3. The commands Ansible executes depend on the steps in the playbook.
You can solve your problem by modifying the Falco rules: evaluate proc.pcmdline in the rule and walk the chain of proc.aname to identify that the command was executed by the Ansible process (e.g. the process is python3, its parent is sh, its grandparent is sudo, and so on).
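A sketch of what that override can look like. The ancestor indices follow the chain in the alert above (python3 <- sh <- sudo <- sh <- sshd), and "Read sensitive file untrusted" is assumed to be the stock rule name firing here; adjust both to your environment:

```yaml
# Hypothetical override appended to the stock rule: stop alerting when
# the reading process has the Ansible-over-SSH ancestry from the alert.
- rule: Read sensitive file untrusted
  append: true
  condition: >
    and not (proc.name = python3 and
             proc.aname[2] = sudo and
             proc.aname[4] = sshd)
```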

Google Cloud Console Jenkins password

I've read a lot of documentation.
I set up Jenkins on GKE using the default Kubernetes creation.
When I go to log in, Jenkins asks me for a password to unlock it.
I'm unable to find that password.
Thanks
Access the Jenkins container via cloud shell.
First, get the pod id:
kubectl get pods --namespace=yourNamespace
jenkins-867df9fcb8-ctfq5 1/1 Running 0 16m
Then execute a bash shell in the pod:
kubectl exec -it --namespace=yourNamespace jenkins-867df9fcb8-ctfq5 -- bash
Then just cd to the directory where the initialAdminPassword file is saved and use cat to print its value.
The password is in a file under the secrets folder: /var/jenkins_home/secrets/initialAdminPassword.
You can go inside the container this way in case volume mapping is not done.
I had the same issue when creating Jenkins on a GKE cluster, and I couldn't even find the initialAdminPassword (I tried to look inside the volume with no luck)...
Since I wanted authentication on the cluster anyway, I just created my own image with the Google OAuth plugin and a Groovy file, using this repo as a model: https://github.com/Sho2010/jenkins-google-login
This way I can log in with my Google account. If you need another auth method, you should be able to find it on the net.
If you just want to test Jenkins and don't need a password, use JAVA_OPTS to skip the setup wizard:
- name: JAVA_OPTS
  value: -Xmx4096m -Djenkins.install.runSetupWizard=false
If you are using the basic Jenkins image you then won't have any password and will have full access to your Jenkins (don't leave it like this if you are going to create production-ready jobs).
For the GKE Marketplace "click to deploy" Jenkins, the instructions are pretty simple and can be found in the application's "Next steps" description after deployment:
Access Jenkins instance.
Identify HTTPS endpoint.
echo https://$(kubectl -njenkins get ingress -l "app.kubernetes.io/name=jenkins-1" -ojsonpath="{.items[0].status.loadBalancer.ingress[0].ip}")/
For HTTPS you have to accept a certificate (we created a temporary one for you).
Now you need a password.
kubectl -njenkins exec \
$(kubectl -njenkins get pod -oname | sed -n /\\/jenkins-1-jenkins-deployment/s.pods\\?/..p) \
-- cat /var/jenkins_home/secrets/initialAdminPassword
To fully configure Jenkins instance follow on screen instructions.
I've tested it and it works as expected.
Another guide with almost the same steps can be found here
The Jenkins Docker image usually shows the initial password in the container log.

Installing Jenkins-X on GKE

This may sound like a stupid question, but I am installing Jenkins-X on a Kubernetes cluster on GKE. When I install through Cloud Shell, the /usr/local/bin folder I move the binary into is reset every time the shell is restarted.
My question is two-fold:
Am I correct in installing Jenkins-X through Cloud Shell (and not on a particular node)?
How can I make the jx binary available when Cloud Shell is restarted (or at least keep it on the path at all times)?
I am running jx from the Cloud Shell.
In the Cloud Shell you are already logged in and you have a project configured. To prevent jx from logging in to Google Cloud and selecting a project again, use the following arguments:
jx create cluster gke --skip-login=true --project-id projectId
Download jx to ~/bin and update $PATH to include both ~/bin and ~/.jx/bin. Put the following in ~/.profile:
if [ -d "$HOME/bin" ] ; then
  PATH="$HOME/bin:$PATH"
fi
PATH="$HOME/.jx/bin:$PATH"
~/.jx/bin is the place where jx downloads helm if needed.
Google Cloud Shell VMs are ephemeral and they are discarded shortly after the end of a session. However, your home directory persists, so anything installed in the home directory will remain from session to session.
I am not familiar with Jenkins-X. If it requires a daemon process running in the background, Cloud Shell is not a good option and you should probably set up a GCE instance. If you just need to run some command-line utilities to control a GKE cluster, make sure that whatever you install goes into your home directory where it will persist across Cloud Shell sessions.
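The "keep it in your home directory" advice can be sketched like this, with a scratch HOME and a stub standing in for the jx binary (both are illustrative, not the real install procedure):

```shell
#!/bin/sh
# Use a scratch HOME so the demo doesn't touch the real one.
HOME=$(mktemp -d)

# "Install" the tool into ~/bin (a stub stands in for the jx binary).
mkdir -p "$HOME/bin"
printf '#!/bin/sh\necho jx-stub\n' > "$HOME/bin/jx"
chmod +x "$HOME/bin/jx"

# The PATH line you would keep in ~/.profile so it survives restarts.
PATH="$HOME/bin:$PATH"

jx   # resolves to ~/bin/jx, which persists across Cloud Shell sessions
```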

Jenkins not recognizing "docker" command on Windows 7

I have installed Jenkins and Docker Toolbox on the same machine, running Windows 7.
In the Jenkins build, all commands work fine except docker.
When I try to run the docker command in a build step, Jenkins gives me this error:
E:\Jenkins\workspace\docker-app>docker build -t docker-app .
'docker' is not recognized as an internal or external command,
operable program or batch file.
But the same command works fine from the Windows command prompt.
Any help would be much appreciated.
I had exactly the same issue until I added the Docker path to the system PATH variable.
Make sure the PATH your Jenkins job sees includes the Docker directory.
From your description it seems that:
You have a Windows 7 machine with Docker Toolbox installed.
Are you running Jenkins inside a container?
If yes, then you won't be able to run docker commands from the Jenkins box, because Jenkins is running inside a Docker container and Docker is not installed in that container; that's why it throws 'docker' is not recognized as an internal or external command, operable program or batch file, which is correct.
To get this working you need to install Docker inside your Docker container; that concept is called "Docker-in-Docker".
If you need any help or clarification on this, please let me know.
I came across the same issue some time back; hope this helps anyone down the line.
Even adding Docker Toolbox to the environment variables didn't work for me.
This is what I did:
1) go to Jenkins --> Manage Jenkins --> Configure System
2) go to the Global properties section
3) add the following environment variables
a) DOCKER_CERT_PATH = C:\Users\%USER%\.docker\machines\default
b) DOCKER_HOST = tcp://192.168.99.XX:2376 (it might be different in your case)
c) DOCKER_MACHINE_NAME = default
d) DOCKER_TLS_VERIFY = 1
If the problem still persists after the above changes:
4) add the Git binary path to the system PATH environment variable
a) in my case it was C:\Program Files\Git\usr\bin
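A quick way to see which docker binary (if any) the build environment resolves is to print it from the job itself. A POSIX sh sketch is shown below; in a Windows batch build step the equivalent check is `where docker`:

```shell
#!/bin/sh
# Print the docker binary the current PATH resolves to, or a clear
# message if none is found - this is what the Jenkins build step sees.
if docker_path=$(command -v docker 2>/dev/null); then
  echo "docker resolved to: $docker_path"
else
  echo "docker not on PATH"
fi
```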

Communication between booted Vagrant Virtual Machine and Jenkins

I am trying to create a VM, run a few tests, and destroy it once done. I am using Jenkins' 'Boot up Vagrant VM' option to boot the VM, and Chef to install the required packages and run the tests in it. When testing completes in this VM, is there any way it can communicate the results back to the Jenkins job that triggered it?
I am stuck on this part.
I have implemented booting the VM from a custom Vagrant box which has all the essential packages and software required to run the tests.
First of all, thanks to Markus; if he had left an answer, I'd surely have accepted it.
I edited the Vagrantfile to add a synced folder:
config.vm.synced_folder "host/", "/guest"
This creates a /guest folder in the VM, and the host/ folder we created on the host system is mirrored into the VM.
All I did then, as Markus suggested, was set up polling from Jenkins (using the Files Found Trigger plugin) on a folder, looking for the specific file the VM is expected to write.
In the VM, whenever testing is done, I simply put the result in /guest; it automatically appears in the host/ folder on my local machine, the folder Jenkins is polling, and the job watching that folder builds. Ta-dah!
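The handoff can be sketched like this, with one local directory standing in for the synced host/ <-> /guest pair (the file name is illustrative):

```shell
#!/bin/sh
# One directory plays both roles: host/ on the Jenkins machine and
# /guest inside the VM are the same synced folder.
shared=$(mktemp -d)

# Inside the VM, after the tests finish:
echo "tests: PASS" > "$shared/result.txt"

# On the Jenkins side, the Files Found trigger polls the same folder
# and the triggered job reads the result:
cat "$shared/result.txt"
```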