I have to write a declarative Jenkins pipeline which should be able to:
ssh into remote Ubuntu VMs launched through Ansible Tower
switch to the sudo user
run a few scripts
But I'm unable to establish connectivity to the Ubuntu machine.
I referred to https://github.com/jenkinsci/ssh-steps-plugin
and assumed that the password to be provided while configuring the credentials (while creating the credentialsId) is the password of my Ubuntu machine.
Could you help me figure out what could be going wrong?
pipeline {
    agent any
    stages {
        stage('SSH') {
            steps {
                script {
                    remote = [:]
                    remote.name = "xxxx"
                    remote.host = "xxxx"
                    remote.allowAnyHosts = true
                    remote.failOnError = true
                    withCredentials([usernamePassword(credentialsId: 'xxxx', passwordVariable: 'password', usernameVariable: 'username')]) {
                        remote.user = username
                        remote.password = password
                        sshCommand remote: remote, command: "ls -l"
                    }
                }
            }
        }
    }
}
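Once connectivity works, my plan for the "switch to sudo user and run a few scripts" part is roughly the sketch below; the sudo flag on sshCommand and the script name are assumptions based on my reading of the plugin's README:

// Run a single command as sudo on the remote host (sudo option per the plugin README).
sshCommand remote: remote, command: "whoami", sudo: true
// Run a local shell script on the remote host (the script name is just an example).
sshScript remote: remote, script: "setup.sh"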
Here is my issue: I have a service (nhasi.service) running on a remote Linux server. I have created a Jenkins pipeline which copies my jar file to the remote server, and once the file copy is done, nhasi.service has to be restarted. I have tried the command below
sh 'sudo systemctl restart nhasi.service'
but I am getting the following error: systemctl: command not found
My Jenkins server is running on a Windows server.
sh 'sudo systemctl restart nhasi.service' will execute the command on the Jenkins machine itself. So you have to SSH into the remote machine and execute the commands there. You can use something like SSH Steps for this.
def remote = [:]
remote.name = 'test'
remote.host = 'test.domain.com'
remote.user = 'root'
remote.password = 'password'
remote.allowAnyHosts = true
stage('Remote SSH') {
    sshCommand remote: remote, command: "systemctl restart nhasi.service"
}
Another option is to add the remote server as a Jenkins agent and execute the command on that agent, as sketched below.
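A minimal sketch of that approach, assuming the remote server is connected as an agent with a label such as 'nhasi-server' (the label name is just an example):

node('nhasi-server') {
    // This runs directly on the remote machine, so systemctl is available.
    sh 'sudo systemctl restart nhasi.service'
}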
Thank you @ycr, your answer helped me find the final solution. I implemented it as below:
stage("Restarting Service") {
steps{
script{
def remote = [:]
remote.name = 'test'
remote.host = 'test.domain.com'
remote.user = 'root'
remote.password = 'password'
remote.allowAnyHosts = true
sshCommand remote: remote, command: "systemctl restart nhasi.service"
}
}
}
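If you'd rather not hard-code the password, the same stage can read the username and password from the Jenkins credentials store; a sketch assuming a usernamePassword credential with id 'remote-server-creds' (the id is just an example):

stage("Restarting Service") {
    steps {
        script {
            def remote = [:]
            remote.name = 'test'
            remote.host = 'test.domain.com'
            remote.allowAnyHosts = true
            // Bind the stored username/password to variables for this block only.
            withCredentials([usernamePassword(credentialsId: 'remote-server-creds', usernameVariable: 'SSH_USER', passwordVariable: 'SSH_PASS')]) {
                remote.user = SSH_USER
                remote.password = SSH_PASS
                sshCommand remote: remote, command: "systemctl restart nhasi.service"
            }
        }
    }
}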
I want to connect to my AP server from Jenkins using ssh.
I made an ssh key on the AP server
(id_rsa, authorized_keys (before id_rsa.pub)).
I registered my ssh key and password in Jenkins credentials.
And when I run my script, this error occurs:
'Jenkins ssh permission denied (publickey, gssapi-keyex, gssapi-with-mic, password)'
I checked all the ssh configuration, no problem (maybe..).
Can anybody help me?
This is my pipeline script
pipeline {
    agent any
    stages {
        stage('SSH') {
            steps {
                sshagent(credentials: ['my credential name']) {
                    sh """
                        ssh -o StrictHostKeyChecking=no ${TARGET_HOST} "pwd"
                    """
                }
            }
        }
    }
    environment {
        TARGET_HOST = "username@ip"
    }
}
Can anyone let me know how to connect to a remote server (server2) from Jenkins (server1)?
That is, how do I do ssh via the command line and from a job?
sudo ssh user@server2
For freestyle jobs, you would use the Jenkins SSH plugin.
For pipelines, you have the pipeline SSH steps, which do the same:
node {
    def remote = [:]
    remote.name = 'test'
    remote.host = 'test.domain.com'
    remote.user = 'root'
    remote.password = 'password'
    remote.allowAnyHosts = true
    stage('Remote SSH') {
        sshCommand remote: remote, command: "ls -lrt"
        sshCommand remote: remote, command: "for i in {1..5}; do echo -n \"Loop \$i \"; date ; sleep 1; done"
    }
}
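Besides sshCommand, the same plugin also provides steps like sshScript, sshPut and sshGet; a brief sketch (the file and directory names are just examples):

stage('Copy and run') {
    // Upload a local file to the remote host, then run a local script there.
    sshPut remote: remote, from: 'app.jar', into: '/opt/app'
    sshScript remote: remote, script: 'deploy.sh'
}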
I have 2 servers on AWS EC2. I want to deploy our Node.js application to both instances.
The code below works fine if both instances are available.
node(label: 'test') {
    def sshConn = 'ssh -i /home/ec2-user/pem/ourpemfile.pem ec2-user@<IP of server1>'
    def sshConn1 = 'ssh -i /home/ec2-user/pem/ourpemfile.pem ec2-user@<IP of server2>'
    stage('Checkout from Github')
    {
        checkout([
            $class: 'GitSCM',
            *
            *
        ])
    }
    stage('Build for Node1')
    {
        echo "Starting to Build..."
        sh "$sshConn pm2 stop application || true"
    }
    stage('Deploy to Node1')
    {
        echo "Starting Deployment..."
    }
    stage('Build for Node2')
    {
        echo "Starting to Build..."
        sh "$sshConn1 pm2 stop application || true"
    }
    stage('Deploy to Node2')
    {
        echo "Starting Deployment..."
    }
}
But my use case is:
if one of the servers is stopped, the build job must still succeed and the application should deploy to the available instance.
Currently, I am facing a timeout error if we stop server1 and run the Jenkins job.
It depends on your setup.
1) You can connect your nodes to Jenkins as agents via the ssh-slaves plugin,
and then you can run commands on your servers via
node('node_label') {
sh('any command here')
}
2) You can use the ssh-agent plugin and put your private key into Jenkins credentials (see the sketch after this list).
3) Use retry:
retry(3) {
    // your code
}
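Combining options 2 and 3, a minimal sketch (the credential id, user and host are assumptions):

retry(3) {
    sshagent(credentials: ['ec2-deploy-key']) {
        // The whole block is retried up to 3 times if the SSH connection fails.
        sh 'ssh -o StrictHostKeyChecking=no ec2-user@<IP of server1> "pm2 restart application"'
    }
}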
You can check EC2 instance states via AWS CLI commands and, depending on their states, run or skip your deployment.
If you want to give it a shot, you'll have to declare your AWS credentials in Jenkins using the 'CloudBees AWS Credentials' plugin
and add something like this to your pipeline:
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                  accessKeyVariable: 'aV',
                  secretKeyVariable: 'sV',
                  credentialsId: 'id_of_your_credentials']]) {
    sh '''
        AWS_ACCESS_KEY_ID=${aV} \
        AWS_SECRET_ACCESS_KEY=${sV} \
        AWS_DEFAULT_REGION=us-east-1 \
        aws ec2 describe-instances --filters Name=instance-state-name,Values=running --query "Reservations[*].Instances[?Tags[?Key == 'Name' && contains(Value, 'server1')]].[Tags[3].Value,NetworkInterfaces[0].PrivateIpAddress,InstanceId,State.Name]" --output text
    '''
}
Regarding the AWS CLI command:
I don't know how you manage your servers; I've assumed that you use a 'Name' tag to identify them.
Also, I think you should consider max's suggestion and use the SSH plugin for managing the configuration, credentials, etc.
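For instance, a minimal sketch of gating one node's deployment on its state (the tag value, the query, and the pm2 command are assumptions, and it needs to run where the AWS credentials above are available):

// $sshConn comes from the pipeline in the question.
def server1Id = sh(
    script: "aws ec2 describe-instances --filters Name=instance-state-name,Values=running Name=tag:Name,Values=server1 --query 'Reservations[*].Instances[*].InstanceId' --output text",
    returnStdout: true
).trim()

if (server1Id) {
    stage('Deploy to Node1') {
        // Only reached when server1 is reported as running.
        sh "$sshConn pm2 stop application || true"
    }
} else {
    echo 'server1 is not running, skipping its deployment'
}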
Another option can be to use ssh-agent. You have to store your private key in the credentials plugin (it is also possible to configure AWS secrets for that),
and then in your pipeline:
https://www.jenkins.io/doc/pipeline/steps/ssh-agent/
node {
    sshagent(credentials: ['deploy-dev']) {
        sh 'ssh -o StrictHostKeyChecking=no -l cloudbees 192.168.1.106 uname -a'
    }
}
Below is the Jenkins declarative pipeline (Groovy DSL) for setting the Terraform path and retrieving the service principal credentials to run terraform init and terraform plan.
When run against Terraform 12.0, I get the error below, even though I tested the same Azure service principal credentials mentioned in the pipeline in a Jenkins freestyle job, and az login worked fine.
+ terraform init -input=false
Initializing modules...
Initializing the backend...

Error: Error building ARM Config: Error populating Client ID from the Azure CLI: No Authorization Tokens were found - please re-authenticate using `az login`.
pipeline {
    agent any
    stages {
        stage('Set Terraform path') {
            steps {
                script {
                    def tfHome = tool name: 'Terraform'
                    env.PATH = "${tfHome}:${env.PATH}"
                }
                sh 'terraform version'
            }
        }
        stage('Provision infrastructure') {
            steps {
                dir('environments/dev') {
                    withCredentials([azureServicePrincipal('xx-xxx-subscription-azure-sp')]) {
                        sh 'az login --service-principal -u $AZURE_CLIENT_ID -p $AZURE_CLIENT_SECRET -t $AZURE_TENANT_ID'
                        sh 'terraform init -input=false'
                        sh 'terraform plan -out=tfplan -input=false'
                    }
                    // sh 'terraform destroy -auto-approve'
                }
            }
        }
    }
}
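One variation I have been considering (a sketch, not a confirmed fix) is to pass the service principal to Terraform directly through the ARM_* environment variables that the azurerm provider reads, instead of relying on the az CLI token:

withCredentials([azureServicePrincipal('xx-xxx-subscription-azure-sp')]) {
    sh '''
        # Assumption: the azurerm provider/backend can authenticate from these
        # variables on its own, without an az login token.
        export ARM_CLIENT_ID=$AZURE_CLIENT_ID
        export ARM_CLIENT_SECRET=$AZURE_CLIENT_SECRET
        export ARM_TENANT_ID=$AZURE_TENANT_ID
        export ARM_SUBSCRIPTION_ID=$AZURE_SUBSCRIPTION_ID
        terraform init -input=false
        terraform plan -out=tfplan -input=false
    '''
}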