Can anyone let me know how to connect to a remote server (server2) from Jenkins (server1)?
That is, how do I SSH to it both from the command line and from a job? On the command line I tried:
sudo ssh user@server2
For freestyle jobs, you can use the Jenkins SSH plugin.
For pipelines, the SSH Pipeline Steps plugin provides equivalent steps:
node {
    def remote = [:]
    remote.name = 'test'
    remote.host = 'test.domain.com'
    remote.user = 'root'
    remote.password = 'password'
    remote.allowAnyHosts = true

    stage('Remote SSH') {
        sshCommand remote: remote, command: "ls -lrt"
        sshCommand remote: remote, command: "for i in {1..5}; do echo -n \"Loop \$i \"; date ; sleep 1; done"
    }
}
Here is my issue: I have a service (nhasi.service) running on a remote Linux server, and I have created a Jenkins pipeline that copies my jar file to that server. Once the copy is done, I want nhasi.service to be restarted. I have tried the command below
sh 'sudo systemctl restart nhasi.service'
but I am getting the following error: systemctl: command not found
My Jenkins server is running on a Windows server.
sh 'sudo systemctl restart nhasi.service' will execute the command on the Jenkins machine itself. You have to SSH into the remote machine and execute the command there. You can use something like the SSH Pipeline Steps plugin for this:
def remote = [:]
remote.name = 'test'
remote.host = 'test.domain.com'
remote.user = 'root'
remote.password = 'password'
remote.allowAnyHosts = true

stage('Remote SSH') {
    sshCommand remote: remote, command: "systemctl restart nhasi.service"
}
Another option is to add the remote server as a Jenkins agent and execute the command on the remote agent.
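A minimal sketch of that approach, assuming the remote server is already connected as an agent with the (hypothetical) label 'nhasi-server' and its user is allowed to run the restart via sudo; the stage goes inside your declarative pipeline:
stage('Restart nhasi.service') {
    // 'nhasi-server' is a hypothetical agent label for the remote machine.
    agent { label 'nhasi-server' }
    steps {
        // Runs on the remote agent itself, so systemctl is available locally.
        sh 'sudo systemctl restart nhasi.service'
    }
}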
Thank you @ycr, your answer helped me find the final solution. I implemented it as below:
stage("Restarting Service") {
steps{
script{
def remote = [:]
remote.name = 'test'
remote.host = 'test.domain.com'
remote.user = 'root'
remote.password = 'password'
remote.allowAnyHosts = true
sshCommand remote: remote, command: "systemctl restart nhasi.service"
}
}
}
I have to write a declarative Jenkins pipeline which will be able to:
ssh into remote Ubuntu VMs launched through Ansible Tower,
switch to a sudo user, and
run a few scripts.
But I'm unable to establish connectivity to the Ubuntu machine.
I referred to https://github.com/jenkinsci/ssh-steps-plugin,
assuming that the password to be provided while configuring the credentials (while creating the credentialsId) is the password of my Ubuntu machine.
Could you verify what could be going wrong?
pipeline {
    agent any
    stages {
        stage('SSH') {
            steps {
                script {
                    remote = [:]
                    remote.name = "xxxx"
                    remote.host = "xxxx"
                    remote.allowAnyHosts = true
                    remote.failOnError = true
                    withCredentials([usernamePassword(credentialsId: 'xxxx', passwordVariable: 'password', usernameVariable: 'username')]) {
                        remote.user = username
                        remote.password = password
                        sshCommand remote: remote, command: "ls -l"
                    }
                }
            }
        }
    }
}
I have 2 servers on AWS EC2, and I want to deploy our Node.js application onto both instances.
The code below works fine when both instances are available.
node (label: 'test') {
    def sshConn = 'ssh -i /home/ec2-user/pem/ourpemfile.pem ec2-user@<IP of server1>'
    def sshConn1 = 'ssh -i /home/ec2-user/pem/ourpemfile.pem ec2-user@<IP of server2>'

    stage('Checkout from Github') {
        checkout([
            $class: 'GitSCM',
            *
            *
        ])
    }
    stage('Build for Node1') {
        echo "Starting to Build..."
        sh "$sshConn pm2 stop application || true"
    }
    stage('Deploy to Node1') {
        echo "Starting Deployment..."
    }
    stage('Build for Node2') {
        echo "Starting to Build..."
        sh "$sshConn1 pm2 stop application || true"
    }
    stage('Deploy to Node2') {
        echo "Starting Deployment..."
    }
}
But my use case is:
if one of the servers is stopped, the build job must still succeed and the application should be deployed to the available instance.
Currently, I get a timeout error if we stop server1 and run the Jenkins job.
It depends on your setup.
1) You can connect your nodes to Jenkins as agents via the ssh-slaves plugin, and then run commands on your servers with:
node('node_label') {
    sh('any command here')
}
2) You can use the ssh-agent plugin: put your private key into Jenkins credentials and wrap your ssh calls in an sshagent block (see the sketch after this list).
3) Use retry:
retry(3) {
    // your code
}
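A rough sketch of how 2) and 3) could be combined for the pm2 pipeline above; the credential id 'ec2-deploy-key' and the host placeholder are illustrative assumptions, not something from your setup:
node(label: 'test') {
    stage('Build for Node1') {
        // 'ec2-deploy-key' is a hypothetical "SSH Username with private key" credential,
        // so the key comes from the Jenkins credential store instead of a .pem file on disk.
        sshagent(credentials: ['ec2-deploy-key']) {
            retry(3) {
                sh 'ssh -o StrictHostKeyChecking=no ec2-user@<IP of server1> "pm2 stop application || true"'
            }
        }
    }
}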
You can check the EC2 instances' state via AWS CLI commands and, depending on their state, run your deployment or not (a sketch of gating the deployment on the result follows the snippet below).
If you want to give it a shot, you'll have to declare your AWS credentials in Jenkins using the 'CloudBees AWS Credentials' plugin
and add something like this to your pipeline:
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                  accessKeyVariable: 'aV',
                  secretKeyVariable: 'sV',
                  credentialsId: 'id_of_your_credentials']]) {
    sh '''
        AWS_ACCESS_KEY_ID=${aV} \
        AWS_SECRET_ACCESS_KEY=${sV} \
        AWS_DEFAULT_REGION=us-east-1 \
        aws ec2 describe-instances --filters Name=instance-state-name,Values=running --query "Reservations[*].Instances[?Tags[?Key == 'Name' && contains(Value, 'server1')]].[Tags[3].Value,NetworkInterfaces[0].PrivateIpAddress,InstanceId,State.Name]" --output text
    '''
}
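A sketch of how that output could gate the deployment; the tag filter, the query, and the way the result is checked are illustrative assumptions (it also assumes AWS credentials are available to the shell, e.g. via the withCredentials block above or an instance profile):
// Returns an empty string when no running instance matches the 'server1' Name tag.
def server1Ip = sh(returnStdout: true, script: '''
    AWS_DEFAULT_REGION=us-east-1 aws ec2 describe-instances \
        --filters Name=instance-state-name,Values=running Name=tag:Name,Values=server1 \
        --query "Reservations[*].Instances[*].PublicIpAddress" --output text
''').trim()

if (server1Ip) {
    stage('Deploy to Node1') {
        // server1 is running, deploy as usual
        sh "$sshConn pm2 stop application || true"
    }
} else {
    echo 'server1 is not running, skipping its deployment'
}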
Regarding the AWS CLI command:
I don't know how you manage your servers; I've assumed that you use a 'Name' tag to identify them.
Also, I think you should consider Max's suggestion and use an SSH plugin for managing the configuration, credentials, etc.
Another option is ssh-agent. You have to store the private key in the Credentials plugin (it's also possible to configure AWS secrets for that),
and then use it in your pipeline:
https://www.jenkins.io/doc/pipeline/steps/ssh-agent/
node {
    sshagent(credentials: ['deploy-dev']) {
        sh 'ssh -o StrictHostKeyChecking=no -l cloudbees 192.168.1.106 uname -a'
    }
}
Our environment:
Jenkins version: 2.138.3
Kubernetes plugin: 1.13.5
SSH Agent plugin: 1.17
I have a job that runs OK on an AWS machine (using sshagent works as it should), but when I run the same job on our Kubernetes cluster it fails with an SSH error.
Here is the working pipeline:
pipeline {
    agent {
        label 'deploy-test'
    }
    stages {
        stage('sshagent') {
            steps {
                script {
                    sshagent(['deploy_user']) {
                        sh 'ssh -o StrictHostKeyChecking=no 99.99.999.99 ls'
                    }
                }
            }
        }
    }
}
If I change the label to 'k8s-slave', it fails with:
+ ssh -o StrictHostKeyChecking=no 99.99.999.99 ls
Warning: Permanently added '99.99.999.99' (ECDSA) to the list of known hosts.
Permission denied (publickey).
Any idea?
I just added my Kubernetes configuration in Jenkins.
Why do I get /v1/_ping: Bad Gateway errors when I follow the instructions for using the Artifactory plugin with Docker?
Jenkins 2.60.3 with Artifactory Plugin 2.12.2
Build-Info proxy for Docker images enabled on port 9999
The Jenkins /var/lib/jenkins/secrets/jfrog/certs/jfrog.proxy.crt added to $JAVA_HOME/jre/lib/security/cacerts on the Jenkins master and slave
The JFrog nginx self-signed cert added to $JAVA_HOME/jre/lib/security/cacerts on the Jenkins master and slave
Access to jenkins:9999 is open between hosts
/etc/systemd/system/docker.service.d/http-proxy.conf has contained the following, with no difference in the test results:
[Service]
Environment="HTTP_PROXY=http://jenkins:9999/"
[Service]
Environment="HTTPS_PROXY=https://jenkins:9999/"
A local Docker test (docker login 127.0.0.1:9999) results in:
Error response from daemon: Login: Bad Request to URI: /v1/users/ (Code: 400; Headers: map[Content-Length:[30] Content-Type:[text/html; chars...
The Jenkins test results in: com.github.dockerjava.api.exception.BadRequestException: Bad Request to URI: /images/artifactory:<port>/hello-world:latest/json
Errors in Jenkins log
SEVERE: (DISCONNECTED) [id: ..., L:0.0.0.0/0.0.0.0:... ! R:artifactory/...:5000]:
Caught an exception on ProxyToServerConnection
io.netty.handler.codec.DecoderException:
javax.net.ssl.SSLHandshakeException: General SSLEngine problem
...
Caused by: sun.security.validator.ValidatorException: PKIX path building
failed: sun.security.provider.certpath.SunCertPathBuilderException:
unable to find valid certification path to requested target
My virtual repo and its remote and local repos work when I don't use the Jenkins proxy, but according to the plugin docs I require the Jenkins proxy to get the build info I need for CI/CD promotion.
Adding the certs to cacerts is somewhat less effective if Jenkins doesn't use that cert file. I'm unsure whether adding a cert to a store requires a Jenkins restart, but it does seem to be the case for Tomcat, so that's probably just how Jenkins works.
Configure the Jenkins instance to use a private keystore (see the CloudBees doc on keystores).
Copy $JENKINS_HOME/secrets/jfrog/certs/jfrog.proxy.crt to /etc/docker/certs.d/<artifactory-host>:<port>/ca.crt.
Restart Docker.
Restart Jenkins.
Test the proxy via the command line while tailing the Jenkins log - PASS
docker rmi artifactory:5000/hello-world:latest
docker pull artifactory:5000/hello-world:latest
This should use HTTP_PROXY from /etc/systemd/system/docker.service.d/http-proxy.conf and go to the Jenkins proxy, which then goes to the actual Artifactory host. The required keys should be found in the store, so the SSL handshake will succeed and the v2 API will be used. If not, you'll see errors in jenkins.log.
Test hello-world on a node via shell:
node("docker-experiments") {
withCredentials([usernamePassword(
credentialsId: 'artifactory.jenkins.user',
passwordVariable: 'ARTIFACTORY_PASSWORD',
usernameVariable: 'ARTIFACTORY_USER')]) {
sh "uname -a "
def registry="artifactory:5000"
def tag="${registry}/hello-world:${BUILD_NUMBER}-shelltest"
stage('login') {
sh "docker login ${registry} -u ${ARTIFACTORY_USER} -p ${ARTIFACTORY_PASSWORD}"
}
stage('pull and tag') {
sh "docker pull hello-world"
sh "docker tag hello-world:latest ${tag}"
}
stage('push') {
sh "docker push ${tag}"
}
}
}
Test hello-world on a node via the Artifactory plugin:
node("docker-experiments") {
withCredentials([usernamePassword(
credentialsId: 'artifactory.jenkins.user',
passwordVariable: 'ARTIFACTORY_PASSWORD',
usernameVariable: 'ARTIFACTORY_USER')]) {
def server = Artifactory.server "artifactory01"
def artDocker = Artifactory.docker(username: ARTIFACTORY_USER,
password: ARTIFACTORY_PASSWORD)
def registry="artifactory:5000"
def tag="${registry}/hello-world:${BUILD_NUMBER}-artifactoryTest"
def dockerInfo
stage('pull and tag') {
sh "docker tag hello-world:latest ${tag}"
}
stage('push') {
dockerInfo = artDocker.push "${tag}", "docker-local"
}
stage('publish') {
server.publishBuildInfo(dockerInfo)
}
}
}