I set up a Jenkins build that uses the "Publish over SSH" plugin to remotely execute an Ansible playbook, injecting the build parameters into the call to ansible-playbook.
The command that Jenkins will remotely execute:
ansible-playbook /home/username/test/test.yml --extra-vars "ui_version=$UI_VERSION web_version=$WEB_VERSION git_release=$GIT_RELEASE release_environment=$RELEASE_ENVIRONMENT"
Which is triggered by the following curl:
curl -k --user username:secretPassword -v -X POST https://jenkins/job/Ansible_Test/buildWithParameters?UI_VERSION=abc&WEB_VERSION=def&GIT_RELEASE=ghi&RELEASE_ENVIRONMENT=jkl
Which should be utilizing the following variables: UI_VERSION, WEB_VERSION, GIT_RELEASE, and RELEASE_ENVIRONMENT.
My problem: only the first parameter gets injected, as you can see in the longest line of the Jenkins console output below:
...
SSH: EXEC: completed after 201 ms
SSH: Opening exec channel ...
SSH: EXEC: channel open
SSH: EXEC: STDOUT/STDERR from command [ansible-playbook /home/dholt2/test/test.yml --extra-vars "ui_version=abc web_version= git_release= release_environment="] ...
SSH: EXEC: connected
...
It turns out that the shell was trying to interpret the & after the first parameter, as mentioned here. Quoting the URL resulted in a successful transmission and variable injection.
I should've known that was the cause when the command waited for more input.
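For reference, the working invocation simply quotes the whole URL so the shell no longer splits the command at each &:
curl -k --user username:secretPassword -v -X POST "https://jenkins/job/Ansible_Test/buildWithParameters?UI_VERSION=abc&WEB_VERSION=def&GIT_RELEASE=ghi&RELEASE_ENVIRONMENT=jkl"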
I am using Jenkins to run some Ansible playbooks. One of the simple tests I did was to have the playbook cat the fstab file on a remote server.
The playbook looks like this:
---
- hosts: "test-1-server"
  tasks:
    - name: display /etc/fstab
      shell: cat /etc/fstab
      register: fstab_reg
    - debug: msg="{{ fstab_reg.stdout }}"
In Jenkins, I have a freestyle project that uses Invoke Ansible Playbook to call the above playbook, and the project credentials were set up with a different user: ansible-user. This is different from the default user-jenkins that runs Jenkins. User ansible-user can ssh to all my servers. I have ansible-user set up in Jenkins Credentials with its private key and passphrase. But when I run the project, I get an error:
[update_fstab] $ /usr/bin/ansible-playbook google/ansible/test-scripts/test/sub_book.yml -i /etc/ansible/hosts -f 5 --private-key /tmp/ssh14117407503194058572.key -u ansible-user
[WARNING]: Invalid characters were found in group names but not replaced, use
-vvvv to see details
fatal: [test-1-server]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ansible-user@test-1-server: Permission denied (publickey).", "unreachable": true}
I am not quite sure what exactly the error is saying, as I have set up the private key and passphrase in ansible-user's credentials. What do the group names in the message mean? Because this is done through Jenkins, I am not sure how to pass the -vvvv it suggests.
How can I make Jenkins pass the private key and passphrase to the Ansible playbook?
Thanks!
I think I have found the "issue". After I switched to a different user other than ansible-user, the playbook worked. The interesting thing is that when I created the private key pair for ansible-user, I used "-m PEM", so it should have been fine for Jenkins.
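For reference, a PEM-format key pair can be generated like this; the output path is illustrative:
ssh-keygen -t rsa -m PEM -f ~/.ssh/ansible-user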
I am trying to deploy using Capistrano 3.x.
I configured agent forwarding in my ~/.ssh/config file:
Host git-codecommit.*.amazonaws.com
Hostname xxxx
ForwardAgent yes
IdentityFile /path/to/codecommit_rsa
I did the same thing for my server connection, with ForwardAgent yes as well.
I verified that my server allows agent forwarding in the /etc/ssh/sshd_config file:
AllowAgentForwarding yes
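For reference, the effective setting can also be read back from the running daemon (a quick check; requires root):
sudo sshd -T | grep -i allowagentforwarding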
INFO ----------------------------------------------------------------
INFO START 2017-11-18 16:09:44 -0500 cap production deploy
INFO ---------------------------------------------------------------------------
INFO [b43ed70f] Running /usr/bin/env mkdir -p /tmp as deploy@50.116.2.15
DEBUG [b43ed70f] Command: /usr/bin/env mkdir -p /tmp
INFO [b43ed70f] Finished in 1.132 seconds with exit status 0 (successful).
DEBUG Uploading /tmp/git-ssh-testapp-production-blankman.sh 0.0%
INFO Uploading /tmp/git-ssh-testapp-production-blankman.sh 100.0%
INFO [b1a90dc1] Running /usr/bin/env chmod 700 /tmp/git-ssh-testapp-production-blankman.sh as deploy@50.116.2.15
DEBUG [b1a90dc1] Command: /usr/bin/env chmod 700 /tmp/git-ssh-testapp-production-blankman.sh
INFO [b1a90dc1] Finished in 0.265 seconds with exit status 0 (successful).
INFO [b323707d] Running /usr/bin/env git ls-remote ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/fuweb HEAD as deploy@50.116.2.15
DEBUG [b323707d] Command: ( export GIT_ASKPASS="/bin/echo" GIT_SSH="/tmp/git-ssh-testapp-production-blankman.sh" ; /usr/bin/env git ls-remote ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/fuweb HEAD )
DEBUG [b323707d] Permission denied (publickey).
DEBUG [b323707d] fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
What am I missing here?
You need to make Capistrano aware that you expect it to forward your local key. This can be done by going into your project's config/deploy.rb and adding this line (Capistrano 3 syntax):
set :ssh_options, { forward_agent: true }
IIRC, Capistrano executes commands remotely through SSHKit, so even if you invoke the ssh-agent and add a key locally, I can't say if it will persist for the next command.
As discussed in the comments, an SSH agent must run on the remote server as well as on the local machine that contains the key because the agents at each end need to cooperate to forward the key information. The agent (ssh-agent) is different from the SSH server (sshd). The server accepts connections, while the (otherwise optional) agent manages credentials.
Some systems start an agent automatically upon login. To check if this is the case, log in to the server and run:
$ env | grep SSH
...looking for variables like SSH_AGENT_PID or SSH_AUTH_SOCK. If no agent is started, we can execute the following command to start one on the server:
$ eval "$(ssh-agent)"
As we can see, this evaluates the output of the ssh-agent command because ssh-agent returns a script that sets some needed environment variables in the session.
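For illustration, the script that ssh-agent prints (and that eval runs) looks like this; the socket path and pid vary:
SSH_AUTH_SOCK=/tmp/ssh-XXXXXXXXXX/agent.12345; export SSH_AUTH_SOCK;
SSH_AGENT_PID=12346; export SSH_AGENT_PID;
echo Agent pid 12346;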
We'll need to make sure the agent starts automatically upon login so that it doesn't interfere with the deploy process. If we checked and determined that the agent does not, in fact, start on login, we can add the last command to the "deploy" user's ~/.profile file (or ~/.bash_profile).
Note also that the host specified in the local ~/.ssh/config must match the name or IP address of the host that we want to forward credentials to, not the host that ultimately authenticates using the forwarded key. We need to change:
Host git-codecommit.*.amazonaws.com
...to:
Host 50.116.2.15
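A minimal sketch of the resulting entry, assuming the deploy host is reached directly by that IP:
Host 50.116.2.15
  ForwardAgent yes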
We can verify that the SSH client performs agent forwarding by checking the verbose output:
$ ssh -v deploy#50.116.2.15
...
debug1: Requesting authentication agent forwarding.
...
Of course, be sure to register any needed keys with the local agent by using ssh-add (this can also be done automatically when logging in as shown above). We can check which keys the agent loaded at any time with:
$ ssh-add -l
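For example, the CodeCommit key from the earlier config can be loaded into the local agent with:
$ ssh-add /path/to/codecommit_rsa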
This usually helps me:
ssh-add -D
eval "$(ssh-agent)"
ssh-add
In my CI chain I execute end-to-end tests after a "docker-compose up". Unfortunately my tests often fail because even if the containers are properly started, the programs contained in my containers are not.
Is there an elegant way to verify that my setup is completely started before running my tests?
You could poll the required services to confirm they are responding before running the tests.
curl has built-in retry logic, and it's fairly trivial to build retry logic around some other type of service test.
#!/bin/bash
await(){
  local url=${1}
  local seconds=${2:-30}
  curl --max-time 5 --retry 60 --retry-delay 1 \
    --retry-max-time "${seconds}" "${url}" \
    || exit 1
}
docker-compose up -d
await http://container_ms1:3000
await http://container_ms2:3000
run-ze-tests
The alternative to polling is an event-based system.
If all your services push notifications to an external service (scaeda gave the example of a log file), or you use something like Amazon SNS, your services can emit a "started" event. You can then subscribe to those events and run whatever you need once everything has started.
Docker 1.12 did add the HEALTHCHECK build command. Maybe this is available via Docker Events?
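A minimal sketch of that instruction in a Dockerfile, assuming the service answers HTTP on port 3000 as in the polling example above:
HEALTHCHECK --interval=5s --timeout=3s --retries=5 \
  CMD curl -f http://localhost:3000/ || exit 1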
If you have control over the Docker engine in your CI setup, you could execute docker logs [Container_Name] and read out the last line, which could be emitted by your application.
RESULT=$(docker logs [Container_Name] 2>&1 | grep [Search_String])
logs output example:
Agent pid 13
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
Parsing a specific line:
RESULT=$(docker logs ssh_jenkins_test 2>&1 | grep Enter)
result:
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
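A simple poll loop can be built on the same idea; the search string is illustrative and should be whatever your application prints once it is up:
until docker logs ssh_jenkins_test 2>&1 | grep -q "Identity added"; do
  sleep 1
done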
I use Jenkins to build and the Publish over SSH plugin to deploy my artifacts to a server. After deploying the files I stop the service by calling exec in the plugin:
sudo service myservice stop
and I receive this answer from Publish over SSH:
SSH: EXEC: channel open
SSH: EXEC: STDOUT/STDERR from command [sudo service myservice stop]...
SSH: EXEC: connected
Stopping script myservice
SSH: EXEC: completed after 200 ms
SSH: Disconnecting configuration [172.29.19.2] ...
ERROR: Exception when publishing, exception message [Exec exit status not zero. Status [-1]]
Build step 'Send build artifacts over SSH' changed build result to UNSTABLE
Finished: UNSTABLE
The build is failed but the service is stopped.
My /etc/init.d/myservice:
#! /bin/sh
# /etc/init.d/myservice
#
# Some things that run always
# touch /var/lock/myservice
# Carry out specific functions when asked to by the system
case "$1" in
start)
echo "Starting myservice"
setsid /opt/myservice/bin/myservice --spring.config.location=/etc/ezd/application.properties --server.port=8082 >> /opt/myservice/app.log &
;;
stop)
echo "Stopping script myservice"
pkill -f myservice
#
;;
*)
echo "Usage: /etc/init.d/myservice {start|stop}"
exit 1
;;
esac
exit 0
Please tell me why I get a -1 exit status.
Well, the script is called /etc/init.d/myservice, so it matches the myservice pattern given to pkill -f. And because the script is still alive while waiting for pkill to complete, it gets killed itself and returns -1 for that reason (the result of wait also includes the killing signal, but the Jenkins slave daemon doesn't print it).
Either:
come up with a more specific pattern for pkill,
use a proper pid-file, or
switch to systemd, which can reliably kill exactly the processes it started.
In this day and age, I recommend the last option; systemd is simply a lot more reliable than init scripts.
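As a rough sketch of that last option (unit path and names assumed), a minimal service file wrapping the same command from the init script could look like:
[Unit]
Description=myservice

[Service]
ExecStart=/opt/myservice/bin/myservice --spring.config.location=/etc/ezd/application.properties --server.port=8082

[Install]
WantedBy=multi-user.target
With this in place, sudo systemctl stop myservice signals exactly the process systemd started, so the stop step cannot kill itself.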
Yes, Jan Hudec is right. I called ps ax | grep myservice in the Publish over SSH plugin:
83469 pts/5 Ss+ 0:00 bash -c ps ax | grep myservice service myservice stop
So pkill -f myservice also matches the process with PID 83469, which is the parent of pkill itself. As I understand it, this is the cause of the -1 status.
I changed pkill -f myservice to pkill -f "java.*myservice" and this solved my problem.
I am not quite familiar with Jenkins, but for some reason I am not able to make the Perforce plugin work. I will list the problem and what I have tried so as to give a better understanding.
Jenkins Version - 1.561
Perforce Plugin Version - 1.3.27 (I have perforce path configured in Jenkins)
System - Ubuntu 10.04
Problem:
In the Source Code Management's Project Details section ( when you try to configure a new job ) I get "Unable to check workspace against depot" error.
P4PORT(hostname:port) - rsh:ssh -q -a -x -l p4ssh -q -x xxx.xxx.com /bin/true
Username - ialok
Password - N.A (Connection to SCM is via key authentication so left it blank)
Workspace(client) - ialok_jenkins
I let Jenkins create workspace and manage its view by checking the checkbox for both "Let Jenkins Create Workspace" and "Let Jenkins Manage Workspace View"
Client View Type is a View Map with the following mapping:
//sandbox/srkamise/... //ialok_jenkins/srkamise/...
I have the keys loaded prior to starting Jenkins, and the Jenkins process runs as my user (ialok):
~$ ps aux | grep jenkins
ialok 16608 0.0 0.0 14132 552 ? Ss 11:08 0:00 /usr/bin/daemon --name=ialok --inherit --env=JENKINS_HOME=/var/lib/jenkins --output=/var/log/jenkins/jenkins.log --pidfile=/var/run/jenkins/jenkins.pid -- /usr/bin/java -Djava.awt.headless=true -jar /usr/share/jenkins/jenkins.war --webroot=/var/cache/jenkins/war --httpPort=8080 --ajp13Port=-1
ialok 16609 1.0 13.9 1448716 542156 ? Sl 11:08 1:04 /usr/bin/java -Djava.awt.headless=true -jar /usr/share/jenkins/jenkins.war --webroot=/var/cache/jenkins/war --httpPort=8080 --ajp13Port=-1
Additionally, I used the envInject plugin and under "Prepare an environment for the run" I added the SSH_AGENT_PID, SSH_AUTH_SOCK, P4USER, and P4PORT environment parameters. (I did try without envInject but faced the same issue.)
It looks like some authentication problem, as I have double-checked the path to the p4 executable along with the project mapping and the addition of keys to my environment.
Here is the log file indicating a failed run:
Started by user anonymous
[EnvInject] - Loading node environment variables.
[EnvInject] - Preparing an environment for the build.
[EnvInject] - Keeping Jenkins system variables.
[EnvInject] - Keeping Jenkins build variables.
[EnvInject] - Injecting as environment variables the properties content
P4CONFIG=.perforce
P4PORT=rsh:ssh -q -a -x -l p4ssh -q -x xxx.xxx.com /bin/true
P4USER=ialok
SSH_AGENT_PID=25752
SSH_AUTH_SOCK=/tmp/keyring-7GAS75/ssh
[EnvInject] - Variables injected successfully.
[EnvInject] - Injecting contributions.
Building in workspace /var/lib/jenkins/jobs/fin/workspace
Using master perforce client: ialok_jenkins
[workspace] $ /usr/bin/p4 workspace -o ialok_jenkins
Changing P4 Client Root to: /var/lib/jenkins/jobs/fin/workspace
Changing P4 Client View from:
Changing P4 Client View to:
//sandbox/srkamise/... //ialok_jenkins/srkamise/...
Saving new client ialok_jenkins
[workspace] $ /usr/bin/p4 -s client -i
Caught exception communicating with perforce. TCP receive failed. read: socket: Connection reset by peer
For Command: /usr/bin/p4 -s client -i
With Data:
===================
Client: ialok_jenkins
Description:
Root: /var/lib/jenkins/jobs/fin/workspace
Options: noallwrite clobber nocompress unlocked nomodtime rmdir
LineEnd: local
View:
//sandbox/srkamise/... //ialok_jenkins/srkamise/...
===================
com.tek42.perforce.PerforceException: TCP receive failed. read: socket: Connection reset by peer
For Command: /usr/bin/p4 -s client -i
With Data:
===================
Client: ialok_jenkins
Description:
Root: /var/lib/jenkins/jobs/fin/workspace
Options: noallwrite clobber nocompress unlocked nomodtime rmdir
LineEnd: local
View:
//sandbox/srkamise/... //ialok_jenkins/srkamise/...
===================
at com.tek42.perforce.parse.AbstractPerforceTemplate.saveToPerforce(AbstractPerforceTemplate.java:270)
at com.tek42.perforce.parse.Workspaces.saveWorkspace(Workspaces.java:77)
at hudson.plugins.perforce.PerforceSCM.saveWorkspaceIfDirty(PerforceSCM.java:1787)
at hudson.plugins.perforce.PerforceSCM.checkout(PerforceSCM.java:895)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1251)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:513)
at hudson.model.Run.execute(Run.java:1709)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:231)
ERROR: Unable to communicate with perforce. TCP receive failed. read: socket: Connection reset by peer
For Command: /usr/bin/p4 -s client -i
With Data:
===================
Client: ialok_jenkins
Description:
Root: /var/lib/jenkins/jobs/fin/workspace
Options: noallwrite clobber nocompress unlocked nomodtime rmdir
LineEnd: local
View:
//sandbox/srkamise/... //ialok_jenkins/srkamise/...
===================
Finished: FAILURE
The P4PORT typically is of the form 'hostname:port'. Examples would be:
workshop.perforce.com:1666
myserver.mycompany.net:2500
Here's some docs: http://www.perforce.com/perforce/doc.current/manuals/cmdref/P4PORT.html