Why do I get /v1/_ping: Bad Gateway errors when I follow the instructions for using the Artifactory plugin with Docker?
Jenkins 2.60.3 with Artifactory Plugin 2.12.2
Enable Build-Info proxy for Docker images on port 9999
The Jenkins-generated /var/lib/jenkins/secrets/jfrog/certs/jfrog.proxy.crt added to $JAVA_HOME/jre/lib/security/cacerts on the Jenkins master and slave
The JFrog nginx self-signed cert added to $JAVA_HOME/jre/lib/security/cacerts on the Jenkins master and slave
Access to jenkins:9999 open between hosts
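For reference, the cert imports were done with keytool roughly like this (the alias names are arbitrary, the nginx cert path is a placeholder, and changeit is the default cacerts password):
# import the Jenkins build-info proxy cert into the JVM trust store
sudo keytool -importcert -noprompt -alias jfrog-proxy \
  -file /var/lib/jenkins/secrets/jfrog/certs/jfrog.proxy.crt \
  -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit
# same idea for the Artifactory nginx self-signed cert
sudo keytool -importcert -noprompt -alias artifactory-nginx \
  -file /path/to/artifactory-nginx.crt \
  -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit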
/etc/systemd/system/docker.service.d/http-proxy.conf has contained the following, with no difference in the test results:
[Service]
Environment="HTTP_PROXY=http://jenkins:9999/"
Environment="HTTPS_PROXY=https://jenkins:9999/"
Local docker test (docker login 127.0.0.1:9999) results in
Error response from daemon: Login: Bad Request to URI: /v1/users/ (Code: 400; Headers: map[Content-Length:[30] Content-Type:[text/html; chars...
Jenkins test results in com.github.dockerjava.api.exception.BadRequestException: Bad Request to URI: /images/artifactory:<port>/hello-world:latest/json
Errors in Jenkins log
SEVERE: (DISCONNECTED) [id: ..., L:0.0.0.0/0.0.0.0:... ! R:artifactory/...:5000]:
Caught an exception on ProxyToServerConnection
io.netty.handler.codec.DecoderException:
javax.net.ssl.SSLHandshakeException: General SSLEngine problem
...
Caused by: sun.security.validator.ValidatorException: PKIX path building
failed: sun.security.provider.certpath.SunCertPathBuilderException:
unable to find valid certification path to requested target
My virtual repo, its remote, and its local all work when I don't use the Jenkins proxy, but according to the plugin docs I need the Jenkins proxy to get the build info I need for CI/CD promotion.
Adding the certs to cacerts has little effect if Jenkins isn't actually using that cert file. I'm not sure whether adding a cert to a store requires a Jenkins restart, but it does seem to be the case for Tomcat, so that's probably how Jenkins behaves as well.
Configure the Jenkins instance to use a private keystore (see the CloudBees doc on keystores; a sketch of the JVM options is shown after these steps)
Copy $JENKINS_HOME/secrets/jfrog/certs/jfrog.proxy.crt to /etc/docker/certs.d/<artifactory-host>:<port>/ca.crt
Restart Docker
Restart Jenkins
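A minimal sketch of pointing the Jenkins JVM at a private trust store, assuming a package install that reads /etc/sysconfig/jenkins (Debian/Ubuntu uses JAVA_ARGS in /etc/default/jenkins instead; the keystore path and password are placeholders):
# /etc/sysconfig/jenkins
JENKINS_JAVA_OPTIONS="-Djavax.net.ssl.trustStore=/var/lib/jenkins/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"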
Test the proxy via the command line while tailing the Jenkins log - PASS
docker rmi artifactory:5000/hello-world:latest
docker pull artifactory:5000/hello-world:latest
This should use HTTP_PROXY from /etc/systemd/system/docker.service.d/http-proxy.conf and go to the Jenkins proxy, which then goes to the actual Artifactory host. The required keys should be found in the store, so the SSL handshake succeeds and the v2 API is used. If not, you'll see errors in jenkins.log.
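Roughly, in a second terminal while the pull runs (the Jenkins log path assumes a default package install):
sudo journalctl -u docker -f
sudo tail -f /var/log/jenkins/jenkins.log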
Test hello-world on the node via shell:
node("docker-experiments") {
withCredentials([usernamePassword(
credentialsId: 'artifactory.jenkins.user',
passwordVariable: 'ARTIFACTORY_PASSWORD',
usernameVariable: 'ARTIFACTORY_USER')]) {
sh "uname -a "
def registry="artifactory:5000"
def tag="${registry}/hello-world:${BUILD_NUMBER}-shelltest"
stage('login') {
sh "docker login ${registry} -u ${ARTIFACTORY_USER} -p ${ARTIFACTORY_PASSWORD}"
}
stage('pull and tag') {
sh "docker pull hello-world"
sh "docker tag hello-world:latest ${tag}"
}
stage('push') {
sh "docker push ${tag}"
}
}
}
Test hello-world on the node via the Artifactory plugin:
node("docker-experiments") {
withCredentials([usernamePassword(
credentialsId: 'artifactory.jenkins.user',
passwordVariable: 'ARTIFACTORY_PASSWORD',
usernameVariable: 'ARTIFACTORY_USER')]) {
def server = Artifactory.server "artifactory01"
def artDocker = Artifactory.docker(username: ARTIFACTORY_USER,
password: ARTIFACTORY_PASSWORD)
def registry="artifactory:5000"
def tag="${registry}/hello-world:${BUILD_NUMBER}-artifactoryTest"
def dockerInfo
stage('pull and tag') {
sh "docker tag hello-world:latest ${tag}"
}
stage('push') {
dockerInfo = artDocker.push "${tag}", "docker-local"
}
stage('publish') {
server.publishBuildInfo(dockerInfo)
}
}
}
Related
I want to connect to my app server from Jenkins using SSH.
I created an SSH key pair on the app server (id_rsa, and authorized_keys built from id_rsa.pub).
I registered my SSH key and password in the Jenkins credentials.
When I run my script, I get this error:
'Jenkins ssh permission denied (publickey, gssapi-keyex, gssapi-with-mic, password)'
I checked all of the SSH configuration and found no problem (maybe...).
Can anybody help me?
This is my pipeline script:
pipeline {
    agent any
    environment {
        TARGET_HOST = "username@ip"
    }
    stages {
        // a stage needs a name in declarative pipeline
        stage('ssh test') {
            steps {
                sshagent(credentials: ['my credential name']) {
                    sh """
                        ssh -o StrictHostKeyChecking=no ${TARGET_HOST} "pwd"
                    """
                }
            }
        }
    }
}
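For reference, a manual check of the key from the Jenkins node can narrow this down; the verbose output shows which auth methods are offered and why the key is rejected (the key path and host are placeholders):
# run as the OS user the Jenkins agent runs as
ssh -i ~/.ssh/id_rsa -v username@ip "pwd"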
I am new to Jenkins (ver. 2.204.5). I have set the HTTP Proxy Configuration on my Jenkins as below.
When building my project, I want the HTTP Proxy Configuration information (proxy server, port, user name, password) available in the Execute Shell step. Please help me with it.
I want to use the HTTP Proxy Configuration in my shell to run the commands below:
npm config set proxy http://<username>:<password>@<proxy-server-url>:<port>
npm config set https-proxy http://<username>:<password>@<proxy-server-url>:<port>
You can access the proxy settings in your pipeline script like below:
pipeline {
    agent any
    stages {
        stage('Preparation') {
            steps {
                script {
                    // read the proxy settings Jenkins was configured with
                    def p = jenkins.model.Jenkins.getInstance().proxy
                    env['http_proxy'] = "http://${p.name}:${p.port}"
                    env['https_proxy'] = env['http_proxy']
                    env['no_proxy'] = p.noProxyHost
                }
            }
        }
        stage('Build') {
            steps {
                sh 'docker build -t mms_builder_dockerrpm --build-arg http_proxy="${http_proxy}" --build-arg https_proxy="${https_proxy}" --build-arg no_proxy="${no_proxy}" .'
            }
        }
    }
}
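To tie this back to the npm commands from the question, a hypothetical extra stage inside the same stages block could reuse those variables (this assumes your proxy does not require credentials, since the snippet above only exposes host and port):
stage('npm proxy') {
    steps {
        // reuse the env vars set in 'Preparation'
        sh 'npm config set proxy "${http_proxy}"'
        sh 'npm config set https-proxy "${https_proxy}"'
    }
}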
I have 2 servers on AWS EC2. I want to deploy our Node.js application to both instances.
The code below works fine if both instances are available.
node(label: 'test') {
    def sshConn  = 'ssh -i /home/ec2-user/pem/ourpemfile.pem ec2-user@<IP for server1>'
    def sshConn1 = 'ssh -i /home/ec2-user/pem/ourpemfile.pem ec2-user@<IP for server2>'

    stage('Checkout from Github') {
        checkout([
            $class: 'GitSCM',
            // ... SCM configuration elided ...
        ])
    }
    stage('Build for Node1') {
        echo "Starting to Build..."
        sh "$sshConn pm2 stop application || true"
    }
    stage('Deploy to Node1') {
        echo "Starting Deployment..."
        // ... deployment commands elided ...
    }
    stage('Build for Node2') {
        echo "Starting to Build..."
        sh "$sshConn1 pm2 stop application || true"
    }
    stage('Deploy to Node2') {
        echo "Starting Deployment..."
    }
}
But my use case is: if one of the servers is stopped, the build job must still succeed and the application should be deployed to the available instance.
Currently, I get a timeout error if we stop server1 and run the Jenkins job.
It depends on your setup.
1) You can connect your nodes to Jenkins as slaves via the ssh-slaves plugin, and then run commands on your servers with:
node('node_label') {
    sh('any command here')
}
2) You can use the ssh-agent plugin and put your private key into the Jenkins credentials.
3) Use retry:
retry(3) {
    // your code
}
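For the requirement that the job stays green when one server is down, a minimal sketch (reusing the sshConn variables from the question; you can add -o ConnectTimeout=5 right after ssh in those variables to fail fast) that probes each host before deploying:
stage('Build for Node1') {
    // returnStatus avoids failing the build when the host is unreachable
    def node1Up = sh(returnStatus: true, script: "$sshConn true") == 0
    if (node1Up) {
        sh "$sshConn pm2 stop application || true"
        // ... deploy to Node1 ...
    } else {
        echo 'server1 is unreachable, skipping'
    }
}
// same pattern for Node2, using sshConn1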
You can check the EC2 instance states via AWS CLI commands and, depending on their states, run your deployment or not.
If you want to give it a shot, you'll have to declare your AWS credentials in Jenkins using the 'CloudBees AWS Credentials' plugin
and add something like this to your pipeline:
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                  accessKeyVariable: 'aV',
                  secretKeyVariable: 'sV',
                  credentialsId: 'id_of_your_credentials']]) {
    sh '''
        AWS_ACCESS_KEY_ID=${aV} \
        AWS_SECRET_ACCESS_KEY=${sV} \
        AWS_DEFAULT_REGION=us-east-1 \
        aws ec2 describe-instances --filters Name=instance-state-name,Values=running --query "Reservations[*].Instances[?Tags[?Key == 'Name' && contains(Value, 'server1')]].[Tags[3].Value,NetworkInterfaces[0].PrivateIpAddress,InstanceId,State.Name]" --output text
    '''
}
Regarding the AWS CLI command: I don't know how you manage your servers; I've assumed that you use a 'Name' tag to identify them.
Also, I think you should consider max's suggestion and use the ssh plugin for managing the configuration, credentials, etc.
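A hypothetical way to glue that query into the deployment logic is to capture its output with returnStdout and deploy only when a running instance comes back; the tag filter and server names are assumptions carried over from above, and this belongs inside the same withCredentials block:
def runningIds = sh(returnStdout: true, script: '''
    AWS_ACCESS_KEY_ID=${aV} AWS_SECRET_ACCESS_KEY=${sV} AWS_DEFAULT_REGION=us-east-1 \
    aws ec2 describe-instances \
        --filters Name=instance-state-name,Values=running Name=tag:Name,Values=server1 \
        --query "Reservations[].Instances[].InstanceId" --output text
''').trim()
if (runningIds) {
    sh "$sshConn pm2 stop application || true"
    // ... deploy to server1 ...
} else {
    echo 'server1 is not running, skipping deployment'
}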
Another option is to use ssh-agent. You have to store the private keys in the credentials plugin (it is also possible to configure AWS secrets for that),
and then in your pipeline:
https://www.jenkins.io/doc/pipeline/steps/ssh-agent/
node {
    sshagent(credentials: ['deploy-dev']) {
        sh 'ssh -o StrictHostKeyChecking=no -l cloudbees 192.168.1.106 uname -a'
    }
}
Our env: Jenkins version: 2.138.3
Kubernetes plugin: 1.13.5
Sshagent plugin: 1.17
I have a job that runs fine on an AWS machine (sshagent works as it should), but when I run the same job on our Kubernetes cluster it fails with an SSH error.
Here is the working pipeline:
pipeline {
    agent {
        label 'deploy-test'
    }
    stages {
        stage('sshagent') {
            steps {
                script {
                    sshagent(['deploy_user']) {
                        sh 'ssh -o StrictHostKeyChecking=no 99.99.999.99 ls'
                    }
                }
            }
        }
    }
}
If I change the label to 'k8s-slave', it fails with:
+ ssh -o StrictHostKeyChecking=no 99.99.999.99 ls
Warning: Permanently added '99.99.999.99' (ECDSA) to the list of known hosts.
Permission denied (publickey).
Any idea?
In the end, I just added my Kubernetes configuration in Jenkins.
I have a simple Jenkins pipeline build; this is my Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('deploy-staging') {
            when {
                branch 'staging'
            }
            steps {
                sshagent(['my-credentials-id']) {
                    sh('git push joe@repo:project')
                }
            }
        }
    }
}
I am using sshagent to push to a git repo on a remote server. I have created credentials that point to a private key file in Jenkins master ~/.ssh.
When I run the build, I get this output (I replaced some sensitive info with *'s):
[ssh-agent] Using credentials *** (***#*** ssh key)
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-cjbm7oVQaJYk/agent.11558
SSH_AGENT_PID=11560
$ ssh-add ***
Identity added: ***
[ssh-agent] Started.
[Pipeline] {
[Pipeline] sh
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 11560 killed;
[ssh-agent] Stopped.
[TDBNSSBFW6JYM3BW6AAVMUV4GVSRLNALY7TWHH6LCUAVI7J3NHJQ] Running shell script
+ git push joe@repo:project
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
As you can see, the ssh-agent starts, stops immediately after and then runs the git push command. The weird thing is it did work correctly once but that seemed completely random.
I'm still fairly new to Jenkins - am I missing something obvious? Any help appreciated, thanks.
edit: I'm running a multibranch pipeline, in case that helps.
I recently had a similar issue, though it was inside a Docker container.
The logs gave the impression that ssh-agent exits too early, but the actual problem was that I had forgotten to add the git server to known hosts.
I suggest ssh-ing onto your Jenkins master and trying the same steps the pipeline does with ssh-agent (the CLI). Then you'll see where the problem is.
E.g:
eval $(ssh-agent -s)
ssh-add ~/yourKey
git clone
As explained on help.github.com
Update:
Here is a util to add a host to known_hosts if it is not there yet:
/**
 * Add hostUrl to known_hosts on the system (or container) if necessary, so that ssh commands
 * will go through even if the host key was not previously seen.
 * @param hostUrl
 */
void tryAddKnownHost(String hostUrl) {
    // ssh-keygen -F ${hostUrl} will fail (in bash that means status code != 0) if ${hostUrl} is not yet a known host
    def statusCode = sh script: "ssh-keygen -F ${hostUrl}", returnStatus: true
    if (statusCode != 0) {
        sh "mkdir -p ~/.ssh"
        sh "ssh-keyscan ${hostUrl} >> ~/.ssh/known_hosts"
    }
}
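A hypothetical way to wire that helper into the pipeline from the question (define tryAddKnownHost at the top of the Jenkinsfile, outside the pipeline block; 'repo' stands for whatever host joe@repo:project points at):
steps {
    script {
        // make sure the git server's host key is known before pushing
        tryAddKnownHost('repo')
    }
    sshagent(['my-credentials-id']) {
        sh 'git push joe@repo:project'
    }
}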
I was using this inside Docker, and adding it to my Jenkins master's known_hosts felt a bit messy, so I opted for something like this:
In Jenkins, create a new credential of type "Secret text" (let's call it GITHUB_HOST_KEY) and set its value to the host key, e.g.:
# gets the host for github and copies it. You can run this from
# any computer that has access to github.com (or whatever your
# git server is)
ssh-keyscan github.com | clip
In your Jenkinsfile, save the string to known_hosts
pipeline {
    agent { docker { image 'node:12' } }
    stages {
        stage('deploy-staging') {
            when { branch 'staging' }
            steps {
                withCredentials([string(credentialsId: 'GITHUB_HOST_KEY', variable: 'GITHUB_HOST_KEY')]) {
                    // -p so the step doesn't fail if ~/.ssh already exists
                    sh 'mkdir -p ~/.ssh && echo "$GITHUB_HOST_KEY" >> ~/.ssh/known_hosts'
                }
                sshagent(['my-credentials-id']) {
                    sh 'git push joe@repo:project'
                }
            }
        }
    }
}
This ensures you're using a "trusted" host key.