How to get/access the HTTP Proxy Configuration across all Jenkins projects? - jenkins

I am new to Jenkins (ver. 2.204.5). I have set the HTTP Proxy Configuration on my Jenkins as below.
When building my project, I want the HTTP Proxy Configuration information (proxy server, port, username, password) available in an Execute Shell step. Please help me with this.
I want to use the HTTP Proxy Configuration in my shell to run the commands below:
npm config set proxy http://<username>:<password>@<proxy-server-url>:<port>
npm config set https-proxy http://<username>:<password>@<proxy-server-url>:<port>

You can access the proxy values in your pipeline script like below:
pipeline {
    agent any
    stages {
        stage('Preparation') {
            steps {
                script {
                    // Read the global proxy settings configured under
                    // Manage Jenkins > Manage Plugins > Advanced
                    def p = jenkins.model.Jenkins.getInstance().proxy
                    env.http_proxy = "http://${p.name}:${p.port}"
                    env.https_proxy = env.http_proxy
                    env.no_proxy = p.noProxyHost
                }
            }
        }
        stage('Build') {
            steps {
                sh 'docker build -t mms_builder_dockerrpm --build-arg http_proxy="${http_proxy}" --build-arg https_proxy="${https_proxy}" --build-arg no_proxy="${no_proxy}" .'
            }
        }
    }
}
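If you also need the proxy username and password for the npm commands from the question, hudson.ProxyConfiguration exposes those as well. A minimal sketch (hedged: on newer cores the password accessor is getSecretPassword(), and reading Jenkins.instance.proxy from a sandboxed pipeline may need script approval):
script {
    def p = jenkins.model.Jenkins.getInstance().proxy
    // Include credentials only when a proxy user is configured
    if (p?.userName) {
        env.http_proxy = "http://${p.userName}:${p.password}@${p.name}:${p.port}"
    } else {
        env.http_proxy = "http://${p.name}:${p.port}"
    }
    env.https_proxy = env.http_proxy
}
A later shell step can then simply reuse the variables:
sh 'npm config set proxy "$http_proxy" && npm config set https-proxy "$https_proxy"'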

Related

Stuck with Jenkins not building my docker images

I'm running Jenkins as a container and for some reason I'm having issues :D.
After the pipeline runs docker build -t testwebapp:latest . I get docker: Exec format error on the Build image stage.
The pipeline command docker.build seems to do what it should, so something is wrong with my env?
The Jenkins docker-compose includes docker.sock, so the running Jenkins should be allowed to piggyback off the host Docker?
volumes:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
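As a sanity check, a trivial pipeline step can confirm that the socket mount itself works (a minimal sketch):
node {
    // If /var/run/docker.sock is mounted and the docker CLI is on PATH,
    // this prints both the client and the host daemon versions
    sh 'docker version'
}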
Pipeline script defined in Jenkins:
pipeline {
    agent any
    stages {
        stage('Initialize Docker') {
            steps {
                script {
                    def dockerHome = tool 'myDocker'
                    env.PATH = "${dockerHome}/bin:${env.PATH}"
                }
            }
        }
        stage('Checkout') {
            steps {
                git branch: 'main', url: 'github url'
            }
        }
        stage('Build image') {
            steps {
                script {
                    docker.build("testwebapp:latest")
                }
            }
        }
    }
    post {
        failure {
            script {
                currentBuild.result = 'FAILURE'
            }
        }
    }
}
The global tool configuration is pretty standard:
[screenshot: Jenkins global tool configuration]

Jenkinsfile pipeline stage error in gcloud

I have the below pipeline.
pipeline {
    agent any
    environment {
        PROJECT_ID = "*****"
        IMAGE = "gcr.io/$PROJECT_ID/node-app"
        BRANCH_NAME_NORMALIZED = "${BRANCH_NAME.toLowerCase().replace("/", "_")}"
    }
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t ${IMAGE}:${BRANCH_NAME_NORMALIZED} .'
            }
        }
        stage('Push') {
            steps {
                withCredentials([file(credentialsId: 'jenkins_secret', variable: 'GC_KEY')]) {
                    sh("gcloud auth activate-service-account --key-file=${GC_KEY}")
                }
                sh 'gcloud auth configure-docker'
                sh 'docker push $IMAGE:${BRANCH_NAME_NORMALIZED}'
            }
        }
        stage('Deploy') {
            steps {
                withDockerContainer(image: "gcr.io/google.com/cloudsdktool/cloud-sdk", toolName: 'latest') {
                    withCredentials([file(credentialsId: 'jenkins_secret', variable: 'GC_KEY')]) {
                        sh("gcloud auth activate-service-account --key-file=${GC_KEY}")
                        sh("gcloud container clusters get-credentials k8s --region us-central1 --project ${DEV_PROJECT}")
                        sh("kubectl get pods")
                    }
                }
            }
        }
    }
}
In the Deploy stage it gives the following error:
gcloud auth activate-service-account --key-file=****
WARNING: Could not setup log file in /.config/gcloud/logs, (Error: Could not create directory [/.config/gcloud/logs/2020.02.05]: Permission denied.
Please verify that you have permissions to write to the parent directory.)
ERROR: (gcloud.auth.activate-service-account) Could not create directory [/.config/gcloud]: Permission denied.
Please verify that you have permissions to write to the parent directory.
I can't understand where this command wants to create the directory: in the Docker container or on the host machine?
Has anyone had a similar problem?
A better approach would be to log in to GKE via a Kubernetes service account with a token and a kubeconfig file, instead of activating a Google service account.
This has several advantages, including Kubernetes RBAC support, controlling the blast radius should your credentials be compromised, etc. You can read more about using RBAC Authorization here.
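As a sketch of that approach (the 'kubeconfig' credentials ID is an assumption; store your kubeconfig as a Secret file credential):
withCredentials([file(credentialsId: 'kubeconfig', variable: 'KUBECONFIG')]) {
    // kubectl picks up the KUBECONFIG environment variable automatically
    sh 'kubectl get pods'
}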
You can set where gcloud stores its configs using the environment variable CLOUDSDK_CONFIG:
environment {
    CLOUDSDK_CONFIG = "${env.WORKSPACE}"
}
I had the same problem and that worked for me.
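Applied to the pipeline above, a minimal sketch (reusing the question's 'jenkins_secret' credentials ID):
pipeline {
    agent any
    environment {
        // Point gcloud at a writable directory instead of /.config
        CLOUDSDK_CONFIG = "${env.WORKSPACE}"
    }
    stages {
        stage('Deploy') {
            steps {
                withCredentials([file(credentialsId: 'jenkins_secret', variable: 'GC_KEY')]) {
                    sh 'gcloud auth activate-service-account --key-file=${GC_KEY}'
                }
            }
        }
    }
}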

Facing SSH connection issue during jenkins pipeline

I have 2 servers on AWS EC2. I want to deploy our Node.js application onto both instances.
The code below works fine if both instances are available.
node(label: 'test') {
    def sshConn = 'ssh -i /home/ec2-user/pem/ourpemfile.pem ec2-user@<IP of server1>'
    def sshConn1 = 'ssh -i /home/ec2-user/pem/ourpemfile.pem ec2-user@<IP of server2>'
    stage('Checkout from Github') {
        checkout([
            $class: 'GitSCM',
            *
            *
        ])
    }
    stage('Build for Node1') {
        echo "Starting to Build..."
        sh "$sshConn pm2 stop application || true"
    }
    stage('Deploy to Node1') {
        echo "Starting Deployment..."
    }
    stage('Build for Node2') {
        echo "Starting to Build..."
        sh "$sshConn1 pm2 stop application || true"
    }
    stage('Deploy to Node2') {
        echo "Starting Deployment..."
    }
}
But my use case is:
if one of the servers is stopped, the build job must still succeed and the application should be deployed to the available instance.
Currently, I am facing a timeout error if we stop server1 and run the Jenkins job.
Depends on your setup.
1) You can connect your nodes to Jenkins as slaves via the ssh-slaves plugin, and then run commands on your servers via:
node('node_label') {
    sh('any command here')
}
2) You can use the ssh-agent plugin and put your private key into Jenkins credentials.
3) Use retry:
retry(3) {
    // your code
}
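To satisfy the use case (the build succeeds even when one server is down), the retry idea can be combined with a per-server try/catch and a short connect timeout. A sketch, assuming the question's pem file and pm2 commands:
node(label: 'test') {
    def servers = ['ec2-user@<IP of server1>', 'ec2-user@<IP of server2>']
    for (conn in servers) {
        stage("Deploy to ${conn}") {
            try {
                // ConnectTimeout fails fast instead of hanging on a stopped instance
                sh "ssh -o ConnectTimeout=10 -i /home/ec2-user/pem/ourpemfile.pem ${conn} 'pm2 stop application || true'"
            } catch (err) {
                echo "Skipping ${conn}: ${err}"
            }
        }
    }
}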
You can check your EC2 instances' states via aws-cli commands and, depending on their states, run your deployment or not.
If you want to give it a shot, you'll have to declare your AWS credentials in Jenkins using the 'CloudBees AWS Credentials' plugin,
and add to your pipeline something like this:
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                  accessKeyVariable: 'aV',
                  secretKeyVariable: 'sV',
                  credentialsId: 'id_of_your_credentials']]) {
    sh '''
        AWS_ACCESS_KEY_ID=${aV} \
        AWS_SECRET_ACCESS_KEY=${sV} \
        AWS_DEFAULT_REGION=us-east-1 \
        aws ec2 describe-instances --filters Name=instance-state-name,Values=running --query "Reservations[*].Instances[?Tags[?Key == 'Name' && contains(Value, 'server1')]].[Tags[3].Value,NetworkInterfaces[0].PrivateIpAddress,InstanceId,State.Name]" --output text
    '''
}
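From there you can gate each deployment on the result. A sketch using a simpler tag:Name filter (the tag name is an assumption, as noted below):
def out = sh(
    script: 'aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" "Name=tag:Name,Values=server1" --query "Reservations[].Instances[].InstanceId" --output text',
    returnStdout: true
).trim()
if (out) {
    echo "server1 is running (${out}), deploying"
} else {
    echo "server1 is down, skipping its deployment"
}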
Regarding the AWS CLI command:
I don't know how you manage your servers; I've assumed that you use a 'Name' tag to identify them.
Also, I think you should consider max's suggestion and use the SSH plugin for managing the configuration, credentials, etc.
Another option is using ssh-agent. You have to store your private keys in the credentials plugin (it is also possible to configure AWS secrets for that),
and then use it in your pipeline:
https://www.jenkins.io/doc/pipeline/steps/ssh-agent/
node {
    sshagent(credentials: ['deploy-dev']) {
        sh 'ssh -o StrictHostKeyChecking=no -l cloudbees 192.168.1.106 uname -a'
    }
}

Single SSH connection in Jenkins pipeline

I've created my Jenkinsfile for building my project in production and the pipeline looks like this:
pipeline {
    agent any
    stages {
        stage('Pull') {
            steps {
                sh '''ssh ${SSH_USER}@${SERVER_ADDRESS} <<EOF
cd ${SOURCE_FOLDER}/project
git pull
git status
EOF'''
            }
        }
        stage('Composer') {
            parallel {
                stage('Composer') {
                    steps {
                        sh '''ssh ${SSH_USER}@${SERVER_ADDRESS} <<EOF
docker run --rm -v ${SOURCE_FOLDER}/project:/app composer/composer:latest install
EOF'''
                    }
                }
                stage('Composer 2') {
                    steps {
                        sh '''ssh ${SSH_USER}@${SERVER_ADDRESS} <<EOF
docker run --rm -v ${SOURCE_FOLDER}/project/sub:/app composer/composer:latest install
EOF'''
                    }
                }
            }
        }
    }
}
Is there a way to have all the stages share one single SSH connection, in order to minimise the overhead and the number of connections?
I've done all the SSH setup manually by creating the keys and pasting the public key on the production machine.
You can create a function for the connection and pass the SSH_USER & SERVER_ADDRESS as input parameters to that function. Call this function from all your stages.
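A sketch of that idea (the helper name remoteSh is made up; note it still opens one connection per call, it just removes the duplication):
def remoteSh(String commands) {
    // One place that knows how to reach the production server
    sh """ssh ${env.SSH_USER}@${env.SERVER_ADDRESS} <<EOF
${commands}
EOF"""
}

pipeline {
    agent any
    stages {
        stage('Pull') {
            steps {
                script {
                    remoteSh("cd ${env.SOURCE_FOLDER}/project && git pull && git status")
                }
            }
        }
    }
}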

Artifactory Plugin Proxy Results in /v1/_ping: Bad Gateway

Why do I get /v1/_ping: Bad Gateway errors when I follow the instructions for using the Artifactory plugin with Docker?
jenkins 2.60.3 with Artifactory Plugin 2.12.2
Enable Build-Info proxy for Docker images on port 9999
jenkins /var/lib/jenkins/secrets/jfrog/certs/jfrog.proxy.crt added to $JAVA_HOME/jre/lib/security/cacerts on jenkins master and slave
jfrog nginx self sign cert added to $JAVA_HOME/jre/lib/security/cacerts on jenkins master and slave
access to jenkins:9999 open between hosts
/etc/systemd/system/docker.service.d/http-proxy.conf has contained the following, with no difference in the tests:
[Service]
Environment="HTTP_PROXY=http://jenkins:9999/"
[Service]
Environment="HTTPS_PROXY=https://jenkins:9999/"
Local docker test (docker login 127.0.0.1:9999) results in
Error response from daemon: Login: Bad Request to URI: /v1/users/ (Code: 400; Headers: map[Content-Length:[30] Content-Type:[text/html; chars...
Jenkins test results in com.github.dockerjava.api.exception.BadRequestException: Bad Request to URI: /images/artifactory:<port>/hello-world:latest/json
Errors in Jenkins log
SEVERE: (DISCONNECTED) [id: ..., L:0.0.0.0/0.0.0.0:... ! R:artifactory/...:5000]:
Caught an exception on ProxyToServerConnection
io.netty.handler.codec.DecoderException:
javax.net.ssl.SSLHandshakeException: General SSLEngine problem
...
Caused by: sun.security.validator.ValidatorException: PKIX path building
failed: sun.security.provider.certpath.SunCertPathBuilderException:
unable to find valid certification path to requested target
My virtual repo, its remote, and its local all work when I don't use the Jenkins proxy, but according to the plugin docs I require the Jenkins proxy to get the build info I need for CI/CD promotion.
Adding the certs to cacerts is somewhat less effective if Jenkins doesn't use that cert file. I'm unsure whether adding a cert to a store requires a restart in Jenkins, but it does seem to be the case for Tomcat, so that's probably just how Jenkins works.
Configure the Jenkins instance to use a private keystore (see the CloudBees doc on keystores).
Copy $JENKINS_HOME/secrets/jfrog/certs/jfrog.proxy.crt to /etc/docker/certs.d/<registry-host>:<port>/ca.crt
restart docker
Restart jenkins
test proxy via command line while tailing jenkins log - PASS
docker rmi artifactory:5000/hello-world:latest
docker pull artifactory:5000/hello-world:latest
This should use HTTP_PROXY from /etc/systemd/system/docker.service.d/http-proxy.conf and go through the Jenkins proxy, which then talks to the actual Artifactory host. The required keys should be found in the store, so the SSL handshake will succeed and the v2 API will be used. If not, you'll see errors in jenkins.log.
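For the cacerts import mentioned above, a typical keytool invocation looks like this (a sketch; the alias is made up and the default store password changeit is assumed):
# Import the proxy cert into the JRE trust store Jenkins uses, then restart Jenkins
keytool -importcert -noprompt \
    -alias jfrog-proxy \
    -file /var/lib/jenkins/secrets/jfrog/certs/jfrog.proxy.crt \
    -keystore $JAVA_HOME/jre/lib/security/cacerts \
    -storepass changeit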
test helloworld on node via shell
node("docker-experiments") {
withCredentials([usernamePassword(
credentialsId: 'artifactory.jenkins.user',
passwordVariable: 'ARTIFACTORY_PASSWORD',
usernameVariable: 'ARTIFACTORY_USER')]) {
sh "uname -a "
def registry="artifactory:5000"
def tag="${registry}/hello-world:${BUILD_NUMBER}-shelltest"
stage('login') {
sh "docker login ${registry} -u ${ARTIFACTORY_USER} -p ${ARTIFACTORY_PASSWORD}"
}
stage('pull and tag') {
sh "docker pull hello-world"
sh "docker tag hello-world:latest ${tag}"
}
stage('push') {
sh "docker push ${tag}"
}
}
}
test helloworld on node via artifactory plugin
node("docker-experiments") {
withCredentials([usernamePassword(
credentialsId: 'artifactory.jenkins.user',
passwordVariable: 'ARTIFACTORY_PASSWORD',
usernameVariable: 'ARTIFACTORY_USER')]) {
def server = Artifactory.server "artifactory01"
def artDocker = Artifactory.docker(username: ARTIFACTORY_USER,
password: ARTIFACTORY_PASSWORD)
def registry="artifactory:5000"
def tag="${registry}/hello-world:${BUILD_NUMBER}-artifactoryTest"
def dockerInfo
stage('pull and tag') {
sh "docker tag hello-world:latest ${tag}"
}
stage('push') {
dockerInfo = artDocker.push "${tag}", "docker-local"
}
stage('publish') {
server.publishBuildInfo(dockerInfo)
}
}
}
