I want to connect to my app server from Jenkins using SSH.
I generated an SSH key pair on the app server (id_rsa, and authorized_keys created from id_rsa.pub).
I registered my SSH key and password in the Jenkins credentials.
But when I run my script, this error occurs:
'Jenkins ssh permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)'
I checked all the SSH configuration and found no problem (maybe...).
Can anybody help me?
This is my pipeline script:
pipeline {
    agent any
    environment {
        TARGET_HOST = "username@ip"
    }
    stages {
        stage('ssh-test') {
            steps {
                sshagent(credentials: ['my credential name']) {
                    sh """
                        ssh -o StrictHostKeyChecking=no ${TARGET_HOST} "pwd"
                    """
                }
            }
        }
    }
}
I got stuck in a Jenkins Pipeline with an ssh command. The error is:
+ ssh
/var/lib/jenkins/workspace/test-docker-jenkins@tmp/durable-2c3c7fb4/script.sh: line 1: ssh: not found
script returned exit code 127
My Jenkinsfile is:
pipeline {
    agent {
        docker {
            image 'node:15.12.0-alpine'
        }
    }
    stages {
        stage("Prepare") {
            steps {
                sh "yarn"
            }
        }
        stage("Build") {
            steps {
                sh "yarn build"
            }
        }
        stage("Deploy") {
            steps {
                sh "ssh"
            }
        }
    }
}
Does anyone know how to resolve this problem? Or is there any way to ssh to a remote server in a Jenkins Pipeline? Thanks in advance. Have a good day!
You are trying to ssh from a Docker container based on the node:15.12.0-alpine image, and that image doesn't contain an ssh client. From Jenkins itself you can of course do SSH; see the SSH Steps plugin for Jenkins and the relevant documentation: https://www.jenkins.io/doc/pipeline/steps/ssh-steps/
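If you'd rather keep the Alpine-based agent, you could instead install an SSH client inside the container before using it. A minimal sketch, assuming the container runs as root (you may need args '-u root:root' on the docker agent); the credentials ID, host, and command are placeholders:

stage("Deploy") {
    steps {
        // node:15.12.0-alpine ships without ssh; install the OpenSSH tools first
        sh "apk add --no-cache openssh"
        sshagent(['deploy-key']) {
            // placeholder host and command; host key checking disabled only for testing
            sh "ssh -o StrictHostKeyChecking=no user@example.com 'echo deployed'"
        }
    }
}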
I have the below pipeline.
pipeline {
    agent any
    environment {
        PROJECT_ID = "*****"
        IMAGE = "gcr.io/$PROJECT_ID/node-app"
        BRANCH_NAME_NORMALIZED = "${BRANCH_NAME.toLowerCase().replace("/", "_")}"
    }
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t ${IMAGE}:${BRANCH_NAME_NORMALIZED} .'
            }
        }
        stage('Push') {
            steps {
                withCredentials([file(credentialsId: 'jenkins_secret', variable: 'GC_KEY')]) {
                    sh("gcloud auth activate-service-account --key-file=${GC_KEY}")
                }
                sh 'gcloud auth configure-docker'
                sh 'docker push $IMAGE:${BRANCH_NAME_NORMALIZED}'
            }
        }
        stage('Deploy') {
            steps {
                withDockerContainer(image: "gcr.io/google.com/cloudsdktool/cloud-sdk", toolName: 'latest') {
                    withCredentials([file(credentialsId: 'jenkins_secret', variable: 'GC_KEY')]) {
                        sh("gcloud auth activate-service-account --key-file=${GC_KEY}")
                        sh("gcloud container clusters get-credentials k8s --region us-central1 --project ${DEV_PROJECT}")
                        sh("kubectl get pods")
                    }
                }
            }
        }
    }
}
In the Deploy stage it gives the following error:
gcloud auth activate-service-account --key-file=****
WARNING: Could not setup log file in /.config/gcloud/logs, (Error: Could not create directory [/.config/gcloud/logs/2020.02.05]: Permission denied.
Please verify that you have permissions to write to the parent directory.)
ERROR: (gcloud.auth.activate-service-account) Could not create directory [/.config/gcloud]: Permission denied.
Please verify that you have permissions to write to the parent directory.
I can't understand where this command is trying to create the directory: in the Docker container or on the host machine?
Has anyone had a similar problem?
A better approach would be to log in to GKE via a Kubernetes service account with a token and a kubeconfig file, instead of activating a Google service account.
This has several advantages including Kubernetes RBAC support, controlling blast radius should your credentials be compromised, etc. You can read more about using RBAC Authorization here.
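For illustration, a minimal sketch of that approach, assuming the kubeconfig (with the service-account token embedded) is stored as a Secret file credential in Jenkins and kubectl is available on the agent; the credential ID is hypothetical:

stage('Deploy') {
    steps {
        withCredentials([file(credentialsId: 'k8s-kubeconfig', variable: 'KUBECONFIG')]) {
            // kubectl reads the KUBECONFIG environment variable, so no gcloud auth is needed
            sh 'kubectl get pods'
        }
    }
}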
You can set where gcloud stores its configs using the environment variable CLOUDSDK_CONFIG:
environment {
    CLOUDSDK_CONFIG = "${env.WORKSPACE}"
}
I had the same problem and that worked for me.
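In the context of the question's pipeline, that would look something like this (a sketch; the stage body is illustrative):

pipeline {
    agent any
    environment {
        // make gcloud write its config inside the writable Jenkins workspace
        CLOUDSDK_CONFIG = "${env.WORKSPACE}"
    }
    stages {
        stage('Deploy') {
            steps {
                // gcloud now stores its config under $WORKSPACE instead of /.config
                sh 'gcloud info'
            }
        }
    }
}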
Our env: Jenkins version 2.138.3
Kubernetes plugin: 1.13.5
SSH Agent plugin: 1.17
I have a job that runs OK on an AWS machine (sshagent works as it should), but when I run the same job on our Kubernetes cluster, it fails with an ssh error.
Attached is the working pipeline:
pipeline {
    agent {
        label 'deploy-test'
    }
    stages {
        stage('sshagent') {
            steps {
                script {
                    sshagent(['deploy_user']) {
                        sh 'ssh -o StrictHostKeyChecking=no 99.99.999.99 ls'
                    }
                }
            }
        }
    }
}
If I change the label to 'k8s-slave', it fails on:
+ ssh -o StrictHostKeyChecking=no 99.99.999.99 ls
Warning: Permanently added '99.99.999.99' (ECDSA) to the list of known hosts.
Permission denied (publickey).
Any idea?
I just added my Kubernetes configuration in Jenkins.
I have a simple Jenkins pipeline build; this is my Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('deploy-staging') {
            when {
                branch 'staging'
            }
            steps {
                sshagent(['my-credentials-id']) {
                    sh('git push joe@repo:project')
                }
            }
        }
    }
}
I am using sshagent to push to a git repo on a remote server. I have created credentials that point to a private key file in the Jenkins master's ~/.ssh.
When I run the build, I get this output (I replaced some sensitive info with *'s):
[ssh-agent] Using credentials *** (***@*** ssh key)
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-cjbm7oVQaJYk/agent.11558
SSH_AGENT_PID=11560
$ ssh-add ***
Identity added: ***
[ssh-agent] Started.
[Pipeline] {
[Pipeline] sh
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 11560 killed;
[ssh-agent] Stopped.
[TDBNSSBFW6JYM3BW6AAVMUV4GVSRLNALY7TWHH6LCUAVI7J3NHJQ] Running shell script
+ git push joe@repo:project
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
As you can see, the ssh-agent starts, stops immediately afterwards, and then the git push command runs. The weird thing is that it did work correctly once, but that seemed completely random.
I'm still fairly new to Jenkins - am I missing something obvious? Any help appreciated, thanks.
edit: I'm running a multibranch pipeline, in case that helps.
I recently had a similar issue, though it was inside a Docker container.
The logs gave the impression that ssh-agent exits too early, but actually the problem was that I had forgotten to add the git server to the known hosts.
I suggest ssh-ing onto your Jenkins master and trying to do the same steps as the pipeline does with ssh-agent (the CLI). Then you'll see where the problem is.
E.g.:
eval $(ssh-agent -s)
ssh-add ~/yourKey
git clone
As explained on help.github.com
Update:
Here is a util to add a host to known_hosts if it is not yet there:
/**
 * Add hostUrl to known hosts on the system (or container) if necessary so that ssh commands will go through even if the certificate was not previously seen.
 * @param hostUrl
 */
void tryAddKnownHost(String hostUrl){
    // ssh-keygen -F ${hostUrl} will fail (in bash that means status code != 0) if ${hostUrl} is not yet a known host
    def statusCode = sh script: "ssh-keygen -F ${hostUrl}", returnStatus: true
    if (statusCode != 0) {
        sh "mkdir -p ~/.ssh"
        sh "ssh-keyscan ${hostUrl} >> ~/.ssh/known_hosts"
    }
}
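For example, it could be called just before sshagent in a scripted pipeline; the host, credential ID, and push target below are placeholders:

node {
    // trust the git server before ssh runs non-interactively
    tryAddKnownHost("github.com")
    sshagent(['my-credentials-id']) {
        sh "git push origin staging"
    }
}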
I was using this inside docker, and adding it to my Jenkins master's known_hosts felt a bit messy, so I opted for something like this:
In Jenkins, create a new credential of type "Secret text" (let's call it GITHUB_HOST_KEY), and set its value to be the host key, e.g.:
# gets the host for github and copies it. You can run this from
# any computer that has access to github.com (or whatever your
# git server is)
ssh-keyscan github.com | clip
In your Jenkinsfile, save the string to known_hosts
pipeline {
    agent { docker { image 'node:12' } }
    stages {
        stage('deploy-staging') {
            when { branch 'staging' }
            steps {
                withCredentials([string(credentialsId: 'GITHUB_HOST_KEY', variable: 'GITHUB_HOST_KEY')]) {
                    sh 'mkdir -p ~/.ssh && echo "$GITHUB_HOST_KEY" >> ~/.ssh/known_hosts'
                }
                sshagent(['my-credentials-id']) {
                    sh 'git push joe@repo:project'
                }
            }
        }
    }
}
This ensures you're using a "trusted" host key.
Why do I get /v1/_ping: Bad Gateway errors when I follow the instructions for using artifactory plugin with docker?
Jenkins 2.60.3 with Artifactory Plugin 2.12.2
Enable Build-Info proxy for Docker images on port 9999
Jenkins /var/lib/jenkins/secrets/jfrog/certs/jfrog.proxy.crt added to $JAVA_HOME/jre/lib/security/cacerts on the Jenkins master and slave
JFrog nginx self-signed cert added to $JAVA_HOME/jre/lib/security/cacerts on the Jenkins master and slave
Access to jenkins:9999 open between hosts
/etc/systemd/system/docker.service.d/http-proxy.conf has contained the following, with no difference in the test results:
[Service]
Environment="HTTP_PROXY=http://jenkins:9999/"
[Service]
Environment="HTTPS_PROXY=https://jenkins:9999/"
A local docker test (docker login 127.0.0.1:9999) results in:
Error response from daemon: Login: Bad Request to URI: /v1/users/ (Code: 400; Headers: map[Content-Length:[30] Content-Type:[text/html; chars...
The Jenkins test results in: com.github.dockerjava.api.exception.BadRequestException: Bad Request to URI: /images/artifactory:<port>/hello-world:latest/json
Errors in the Jenkins log:
SEVERE: (DISCONNECTED) [id: ..., L:0.0.0.0/0.0.0.0:... ! R:artifactory/...:5000]:
Caught an exception on ProxyToServerConnection
io.netty.handler.codec.DecoderException:
javax.net.ssl.SSLHandshakeException: General SSLEngine problem
...
Caused by: sun.security.validator.ValidatorException: PKIX path building
failed: sun.security.provider.certpath.SunCertPathBuilderException:
unable to find valid certification path to requested target
My virtual repo, its remote, and its local counterpart all work when I don't use the Jenkins proxy, but according to the plugin docs I need the Jenkins proxy to get the build info required for CI/CD promotion.
Adding the certs to cacerts is somewhat less effective if Jenkins doesn't use that cert file. I'm unsure whether adding a cert to a store requires a restart of Jenkins, but it does seem to be the case for Tomcat, so that's probably just how Jenkins works.
Configure the Jenkins instance to use a private keystore (see the CloudBees doc on keystores); a sketch of the JVM flags follows these steps.
Copy $JENKINS_HOME/secrets/jfrog/certs/jfrog.proxy.crt to /etc/docker/certs.d/<host>:<port>/ca.crt
Restart docker
Restart jenkins
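One way to point the Jenkins JVM at that keystore is via the standard javax.net.ssl properties in the Jenkins startup options; a sketch with example paths (e.g. in /etc/sysconfig/jenkins on RPM-based installs):

# example paths; adjust to where your truststore actually lives
JENKINS_JAVA_OPTIONS="-Djavax.net.ssl.trustStore=/var/lib/jenkins/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"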
Test the proxy via the command line while tailing the Jenkins log - PASS
docker rmi artifactory:5000/hello-world:latest
docker pull artifactory:5000/hello-world:latest
This should use HTTP_PROXY from /etc/systemd/system/docker.service.d/http-proxy.conf and go to the Jenkins proxy, which then goes to the actual Artifactory host. The required keys should be found in the store, so the SSL handshake will be good and the v2 API used. If not, you'll see errors in jenkins.log.
Test hello-world on a node via shell:
node("docker-experiments") {
withCredentials([usernamePassword(
credentialsId: 'artifactory.jenkins.user',
passwordVariable: 'ARTIFACTORY_PASSWORD',
usernameVariable: 'ARTIFACTORY_USER')]) {
sh "uname -a "
def registry="artifactory:5000"
def tag="${registry}/hello-world:${BUILD_NUMBER}-shelltest"
stage('login') {
sh "docker login ${registry} -u ${ARTIFACTORY_USER} -p ${ARTIFACTORY_PASSWORD}"
}
stage('pull and tag') {
sh "docker pull hello-world"
sh "docker tag hello-world:latest ${tag}"
}
stage('push') {
sh "docker push ${tag}"
}
}
}
Test hello-world on a node via the Artifactory plugin:
node("docker-experiments") {
withCredentials([usernamePassword(
credentialsId: 'artifactory.jenkins.user',
passwordVariable: 'ARTIFACTORY_PASSWORD',
usernameVariable: 'ARTIFACTORY_USER')]) {
def server = Artifactory.server "artifactory01"
def artDocker = Artifactory.docker(username: ARTIFACTORY_USER,
password: ARTIFACTORY_PASSWORD)
def registry="artifactory:5000"
def tag="${registry}/hello-world:${BUILD_NUMBER}-artifactoryTest"
def dockerInfo
stage('pull and tag') {
sh "docker tag hello-world:latest ${tag}"
}
stage('push') {
dockerInfo = artDocker.push "${tag}", "docker-local"
}
stage('publish') {
server.publishBuildInfo(dockerInfo)
}
}
}