Unable to connect to remote server through SSH in Jenkins - jenkins

Scenario: While connecting, the server asks for the mount point dynamically, so I'm getting the error below.
Script
node('agent') {
    stage('Sync Repo') {
        sshagent(['poc_ssh_key']) {
            sh """
            ssh -p XXX user@IP $mountpoint(Data003)
            """
        }
    }
}
Error
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This system has been onboarded to TPAM. Please use the TPAM interface link below to request privileged access to the server.
!! TPAM URL:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Please enter the root path variable [data001/data002/data003/data004/data005]
You entered:

After a lot of investigation, I found the solution below, and it's working as expected.
node('agent') {
    stage('Sync Repo') {
        sshagent(['poc_ssh_key']) {
            sh """
            echo 'data003' | ssh -p 2022 user@IP
            """
        }
    }
}
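If the mount point needs to vary per build, the same trick can be combined with a build parameter. A minimal sketch, assuming a declarative pipeline; the parameter name and choice list are illustrative, not from the original question:

```groovy
// Sketch: let the user pick the mount point at build time and pipe the
// answer into the remote prompt. Parameter name/choices are assumptions.
pipeline {
    agent { label 'agent' }
    parameters {
        choice(name: 'MOUNTPOINT',
               choices: ['data001', 'data002', 'data003', 'data004', 'data005'],
               description: 'Root path to answer the server prompt with')
    }
    stages {
        stage('Sync Repo') {
            steps {
                sshagent(['poc_ssh_key']) {
                    // The echoed value answers the TPAM root-path prompt
                    sh "echo '${params.MOUNTPOINT}' | ssh -p 2022 user@IP"
                }
            }
        }
    }
}
```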

Related

Jenkins ssh permission denied (publickey, gssapi-keyex,gssapi-with-mic,password)

I want to connect to my app server from Jenkins using ssh.
I made an ssh key on the app server
(id_rsa, and authorized_keys created from id_rsa.pub).
I registered my ssh key and password in Jenkins credentials.
When I run my script, this error occurs:
'Jenkins ssh permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)'
I checked all the ssh configuration and found no problem (maybe...).
Can anybody help me? ㅠㅠ
This is my pipeline script
pipeline {
    agent any
    stages {
        stage('ssh') {
            steps {
                sshagent(credentials: ['my credential name']) {
                    sh """
                    ssh -o StrictHostKeyChecking=no ${TARGET_HOST} "pwd"
                    """
                }
            }
        }
    }
    environment {
        TARGET_HOST = "username@ip"
    }
}

Jenkinsfile pipeline stage error in gcloud

I have the below pipeline.
pipeline {
    agent any
    environment {
        PROJECT_ID = "*****"
        IMAGE = "gcr.io/$PROJECT_ID/node-app"
        BRANCH_NAME_NORMALIZED = "${BRANCH_NAME.toLowerCase().replace("/", "_")}"
    }
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t ${IMAGE}:${BRANCH_NAME_NORMALIZED} .'
            }
        }
        stage('Push') {
            steps {
                withCredentials([file(credentialsId: 'jenkins_secret', variable: 'GC_KEY')]) {
                    sh("gcloud auth activate-service-account --key-file=${GC_KEY}")
                }
                sh 'gcloud auth configure-docker'
                sh 'docker push $IMAGE:${BRANCH_NAME_NORMALIZED}'
            }
        }
        stage('Deploy') {
            steps {
                withDockerContainer(image: "gcr.io/google.com/cloudsdktool/cloud-sdk", toolName: 'latest') {
                    withCredentials([file(credentialsId: 'jenkins_secret', variable: 'GC_KEY')]) {
                        sh("gcloud auth activate-service-account --key-file=${GC_KEY}")
                        sh("gcloud container clusters get-credentials k8s --region us-central1 --project ${DEV_PROJECT}")
                        sh("kubectl get pods")
                    }
                }
            }
        }
    }
}
In the Deploy stage it gives the following error:
gcloud auth activate-service-account --key-file=****
WARNING: Could not setup log file in /.config/gcloud/logs, (Error: Could not create directory [/.config/gcloud/logs/2020.02.05]: Permission denied.
Please verify that you have permissions to write to the parent directory.)
ERROR: (gcloud.auth.activate-service-account) Could not create directory [/.config/gcloud]: Permission denied.
Please verify that you have permissions to write to the parent directory.
I can't understand where this command wants to create the directory: in the docker container or on the host machine?
Has anyone had a similar problem?
A better approach would be to log in to GKE via a Kubernetes service account with a token, using a kubeconfig file instead of activating a Google service account.
This has several advantages including Kubernetes RBAC support, controlling blast radius should your credentials be compromised, etc. You can read more about using RBAC Authorization here.
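As a sketch of that alternative, assuming you have stored a kubeconfig (authenticating via a Kubernetes service-account token) as a Jenkins secret-file credential; the credential ID `kubeconfig` is a name invented here for illustration:

```groovy
// Sketch: deploy using a kubeconfig file credential instead of
// activating a Google service account. 'kubeconfig' is a hypothetical
// secret-file credential ID holding a token-based kubeconfig.
stage('Deploy') {
    steps {
        withDockerContainer(image: 'gcr.io/google.com/cloudsdktool/cloud-sdk') {
            withCredentials([file(credentialsId: 'kubeconfig', variable: 'KUBECONFIG')]) {
                // kubectl reads the KUBECONFIG environment variable directly,
                // so no gcloud auth step is needed
                sh 'kubectl get pods'
            }
        }
    }
}
```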
You can set where gcloud stores its configs using the environment variable CLOUDSDK_CONFIG:
environment {
CLOUDSDK_CONFIG = "${env.WORKSPACE}"
}
I had the same problem and that worked for me.

Single SSH connection in Jenkins pipeline

I've created my Jenkinsfile for building my project in production and the pipeline looks like this:
pipeline {
    agent any
    stages {
        stage('Pull') {
            steps {
                sh '''ssh ${SSH_USER}@${SERVER_ADDRESS} <<EOF
cd ${SOURCE_FOLDER}/project
git pull
git status
EOF'''
            }
        }
        stage('Composer') {
            parallel {
                stage('Composer') {
                    steps {
                        sh '''ssh ${SSH_USER}@${SERVER_ADDRESS} <<EOF
docker run --rm -v ${SOURCE_FOLDER}/project:/app composer/composer:latest install
EOF'''
                    }
                }
                stage('Composer 2') {
                    steps {
                        sh '''ssh ${SSH_USER}@${SERVER_ADDRESS} <<EOF
docker run --rm -v ${SOURCE_FOLDER}/project/sub:/app composer/composer:latest install
EOF'''
                    }
                }
            }
        }
    }
}
Is there a way to have all the stages share one single SSH connection, to minimise the overhead and the number of connections?
I've done all the SSH setup manually by creating the keys and pasting the public key on the production machine.
You can create a function for the connection, pass SSH_USER and SERVER_ADDRESS as input parameters to that function, and call it from all your stages.
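A minimal sketch of such a helper (the function name is illustrative): each call is still one ssh invocation, but all of a stage's commands for a host are batched into a single connection rather than one per command:

```groovy
// Sketch: run a batch of commands on a remote host over a single ssh connection.
// Name and signature are illustrative, not from the original answer.
def remoteSh(String user, String host, String commands) {
    sh """ssh ${user}@${host} <<EOF
${commands}
EOF"""
}

// Usage: the whole Pull stage becomes one connection.
remoteSh(env.SSH_USER, env.SERVER_ADDRESS, '''cd ${SOURCE_FOLDER}/project
git pull
git status''')
```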

How do I set up postgres database in Jenkins pipeline?

I am using docker to simulate a postgres database for my app. I have been testing it with Cypress for some time and it works fine. I want to set up Jenkins for further testing, but I seem stuck.
On my device, I would use commands
docker create -e POSTGRES_DB=myDB -p 127.0.0.1:5432:5432 --name myDB postgres
docker start myDB
to create it. How can I simulate this in Jenkins pipeline? I need the DB for the app to work.
I use a Dockerfile as my agent, and I have tried putting the ENV variables there, but it does not work; docker is not installed in the pipeline.
The way I see it is either:
Create an image by using a
Somehow install docker inside the pipeline and use the same commands
Maybe with master/slave nodes? I don't understand them well yet.
This might be a use case for the sidecar pattern, one of Jenkins Pipeline's advanced features.
For example (from the above site):
node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
The above example uses the object exposed by withRun, which has the
running container’s ID available via the id property. Using the
container’s ID, the Pipeline can create a link by passing custom
Docker arguments to the inside() method.
The best thing is that the containers are automatically stopped and removed when the work is done.
EDIT:
To use a docker network instead, you can do the following (there is an open Jira to support this OOTB). First, a helper function:
def withDockerNetwork(Closure inner) {
    // Generate the id outside the try block so it is always defined
    // when the finally clause runs
    def networkId = UUID.randomUUID().toString()
    try {
        sh "docker network create ${networkId}"
        inner.call(networkId)
    } finally {
        sh "docker network rm ${networkId}"
    }
}
Actual usage
withDockerNetwork { n ->
    docker.image('sidecar').withRun("--network ${n} --name sidecar") { c ->
        docker.image('main').inside("--network ${n}") {
            // do something with host "sidecar"
        }
    }
}
For declarative pipelines:
pipeline {
    agent any
    environment {
        POSTGRES_HOST = 'localhost'
        POSTGRES_USER = 'myuser'
    }
    stages {
        stage('run!') {
            steps {
                script {
                    docker.image('postgres:9.6').withRun(
                        "-h ${env.POSTGRES_HOST} -e POSTGRES_USER=${env.POSTGRES_USER}"
                    ) { db ->
                        // You can use your own image here, but psql needs to be installed inside
                        docker.image('postgres:9.6').inside("--link ${db.id}:db") {
                            sh '''
                                psql --version
                                RETRIES=10
                                until psql -h ${POSTGRES_HOST} -U ${POSTGRES_USER} -c "select 1" > /dev/null 2>&1 || [ $RETRIES -eq 0 ]; do
                                    echo "Waiting for postgres server, $((RETRIES-=1)) remaining attempts..."
                                    sleep 1
                                done
                            '''
                            sh 'echo "your commands here"'
                        }
                    }
                }
            }
        }
    }
}
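The wait loop above can also be factored into a reusable helper step. A sketch, assuming the client tools are available in the container (`pg_isready` ships with the postgres client); the function name and retry count are illustrative:

```groovy
// Sketch: block until a postgres server answers, or fail the build.
def waitForPostgres(String host, int port = 5432, int retries = 30) {
    for (int i = 0; i < retries; i++) {
        // pg_isready exits 0 once the server accepts connections
        if (sh(script: "pg_isready -h ${host} -p ${port}", returnStatus: true) == 0) {
            return
        }
        sleep 1
    }
    error "postgres at ${host}:${port} did not become ready after ${retries} attempts"
}
```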
Related: Docker wait for postgresql to be running

Jenkins ssh-agent starts and then stops immediately in pipeline build

I have a simple jenkins pipeline build, this is my jenkinsfile:
pipeline {
    agent any
    stages {
        stage('deploy-staging') {
            when {
                branch 'staging'
            }
            steps {
                sshagent(['my-credentials-id']) {
                    sh('git push joe@repo:project')
                }
            }
        }
    }
}
I am using sshagent to push to a git repo on a remote server. I have created credentials that point to a private key file in the Jenkins master's ~/.ssh.
When I run the build, I get this output (I replaced some sensitive info with *'s):
[ssh-agent] Using credentials *** (***#*** ssh key)
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-cjbm7oVQaJYk/agent.11558
SSH_AGENT_PID=11560
$ ssh-add ***
Identity added: ***
[ssh-agent] Started.
[Pipeline] {
[Pipeline] sh
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 11560 killed;
[ssh-agent] Stopped.
[TDBNSSBFW6JYM3BW6AAVMUV4GVSRLNALY7TWHH6LCUAVI7J3NHJQ] Running shell script
+ git push joe@repo:project
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
As you can see, the ssh-agent starts, stops immediately after, and then runs the git push command. The weird thing is that it did work correctly once, but that seemed completely random.
I'm still fairly new to Jenkins - am I missing something obvious? Any help appreciated, thanks.
edit: I'm running a multibranch pipeline, in case that helps.
I recently had a similar issue, though it was inside a docker container.
The logs gave the impression that ssh-agent exits too early, but actually the problem was that I had forgotten to add the git server to the known hosts.
I suggest ssh-ing into your Jenkins master and trying the same steps the pipeline does with ssh-agent (the CLI); then you'll see where the problem is.
E.g:
eval $(ssh-agent -s)
ssh-add ~/yourKey
git clone
As explained on help.github.com
Update:
Here is a util to add a host to known_hosts if not yet added:
/**
 * Add hostUrl to known hosts on the system (or container) if necessary, so that
 * ssh commands will go through even if the host key was not previously seen.
 * @param hostUrl
 */
void tryAddKnownHost(String hostUrl) {
    // ssh-keygen -F ${hostUrl} will fail (in bash that means status code != 0) if ${hostUrl} is not yet a known host
    def statusCode = sh script: "ssh-keygen -F ${hostUrl}", returnStatus: true
    if (statusCode != 0) {
        sh "mkdir -p ~/.ssh"
        sh "ssh-keyscan ${hostUrl} >> ~/.ssh/known_hosts"
    }
}
I was using this inside docker, and adding it to my Jenkins master's known_hosts felt a bit messy, so I opted for something like this:
In Jenkins, create a new credential of type "Secret text" (let's call it GITHUB_HOST_KEY), and set its value to be the host key, e.g.:
# gets the host for github and copies it. You can run this from
# any computer that has access to github.com (or whatever your
# git server is)
ssh-keyscan github.com | clip
In your Jenkinsfile, save the string to known_hosts
pipeline {
    agent { docker { image 'node:12' } }
    stages {
        stage('deploy-staging') {
            when { branch 'staging' }
            steps {
                withCredentials([string(credentialsId: 'GITHUB_HOST_KEY', variable: 'GITHUB_HOST_KEY')]) {
                    sh 'mkdir -p ~/.ssh && echo "$GITHUB_HOST_KEY" >> ~/.ssh/known_hosts'
                }
                sshagent(['my-credentials-id']) {
                    sh 'git push joe@repo:project'
                }
            }
        }
    }
}
This ensures you're using a "trusted" host key.
