I'm trying to deploy a build via a Jenkins pipeline using a Docker agent and an Ansible playbook, but it fails at the Gathering Facts stage as shown below:
TASK [Gathering Facts] *********************************************************
fatal: [destination.box.local]: UNREACHABLE! => {"changed": false, "msg": "argument must be an int, or have a fileno() method.", "unreachable": true}
A similar Jenkins pipeline using agent any and a local Ansible installation (not from Docker) does the job without any hiccups.
The agent section of the Jenkins pipeline looks like:
agent {
    docker {
        image 'artifactory.devbox.local/docker-local/myrepo/jdk8:latest'
        args '-v $HOME/.m2:/root/.m2 -v /etc/ansible:/etc/ansible -v $HOME/.ansible/tmp:/.ansible/tmp -v $HOME/.ssh:/root/.ssh'
    }
}
Any thoughts on what I need to add to it so that Ansible can run the playbook?
PS.
After adding ansible_ssh_common_args='-o StrictHostKeyChecking=no' to the Ansible inventory (or setting host_key_checking = False in the config) I got this error:
TASK [Gathering Facts] *********************************************************
fatal: [destination.box.local]: UNREACHABLE! => {"changed": false, "msg": "'getpwuid(): uid not found: 700'", "unreachable": true}
fatal: [ansible_ssh_common_args=-o StrictHostKeyChecking=no]: UNREACHABLE! => {"changed": false, "msg": "[Errno -3] Try again", "unreachable": true}
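For reference, the second error above suggests the option landed on a line of its own, so Ansible parsed it as a hostname and failed to resolve it ([Errno -3] is a DNS lookup failure). The variable belongs on the host line itself, or under a [group:vars] section; a minimal sketch (the group name is hypothetical):
# inventory
[deploy]
destination.box.local ansible_ssh_common_args='-o StrictHostKeyChecking=no'

# or globally, in ansible.cfg
[defaults]
host_key_checking = False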
In my case it turned out that Jenkins was running the Docker agent with a specific UID and GID. To fix it, I had to rebuild the Docker image, creating an internal jenkins user with the same UID and GID.
For that purpose, at the top of the Jenkinsfile that creates the Docker image I added:
def user_id
def group_id
node {
    user_id = sh(returnStdout: true, script: 'id -u').trim()
    group_id = sh(returnStdout: true, script: 'id -g').trim()
}
and then, during the build stage, passed additional build arguments to Docker:
--build-arg JenkinsUserId=${user_id} --build-arg JenkinsGroupId=${group_id}
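For context, a minimal sketch of how the build stage might consume those values (the stage name and image tag are assumptions):
stage('Build image') {
    steps {
        // forward the captured host UID/GID into the image build
        sh "docker build --build-arg JenkinsUserId=${user_id} --build-arg JenkinsGroupId=${group_id} -t artifactory.devbox.local/docker-local/myrepo/jdk8:latest ."
    }
}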
Then, in the Dockerfile for that build:
FROM alpine:latest
# pick up provided ARGs for the build
ARG JenkinsUserId
ARG JenkinsGroupId
# do your stuff here
# create Ansible config directory
RUN set -xe \
    && mkdir -p /etc/ansible
# create Ansible tmp directory
RUN set -xe \
    && mkdir -p /.ansible/tmp
# set ANSIBLE_LOCAL_TEMP
ENV ANSIBLE_LOCAL_TEMP /.ansible/tmp
# create Ansible cp directory
RUN set -xe \
    && mkdir -p /.ansible/cp
# set ANSIBLE_SSH_CONTROL_PATH_DIR
ENV ANSIBLE_SSH_CONTROL_PATH_DIR /.ansible/cp
# create the Jenkins group and user with the IDs passed from the host
# (note: groupadd/useradd are not part of busybox; on Alpine they require `apk add shadow`)
RUN if ! id $JenkinsUserId; then \
        groupadd -g ${JenkinsGroupId} jenkins; \
        useradd jenkins -u ${JenkinsUserId} -g jenkins --shell /bin/bash --create-home; \
    else \
        addgroup --gid 1000 -S jenkins && adduser --uid 1000 -S jenkins -G jenkins; \
    fi
RUN addgroup jenkins root
# tell Docker that all future commands should run as the jenkins user
USER jenkins
and finally update the Docker agent for the main pipeline that had the issue:
agent {
    docker {
        image 'artifactory.devbox.local/docker-local/myrepo/jdk8:latest'
        args '-v $HOME/.m2:/root/.m2 -v $HOME/.ssh:/home/jenkins/.ssh -v /etc/ansible:/etc/ansible -v $HOME/.ansible/tmp:/.ansible/tmp -v $HOME/.ansible/cp:/.ansible/cp'
    }
}
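To confirm the agent now runs with matching IDs, a quick sanity-check stage helps (a sketch; the stage name is arbitrary):
stage('Verify agent user') {
    steps {
        // should print the same UID/GID as the Jenkins user on the host
        sh 'id'
    }
}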
I am trying to run a SonarQube scanner container that is created from the Dockerfile below:
FROM node:15-buster
################
# Install java #
################
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive \
apt-get -y install openjdk-11-jre-headless && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
############################
# Install SonarQube client #
############################
WORKDIR /root
# the earlier layer removed /var/lib/apt/lists, so refresh the index before installing
RUN apt-get update && apt-get install -y curl grep sed unzip
RUN curl --insecure -o ./sonarscanner.zip -L https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.4.0.2170-linux.zip
RUN unzip -q sonarscanner.zip
RUN rm sonarscanner.zip
RUN mv sonar-scanner-4.4.0.2170-linux sonar-scanner
ENV SONAR_RUNNER_HOME=/root/sonar-scanner
ENV PATH $PATH:/root/sonar-scanner/bin
# Include Sonar configuration and project paths
COPY ./sonar/sonar-runner.properties ./sonar-scanner/conf/sonar-scanner.properties
# Ensure Sonar uses the provided Java for musl instead of a borked glibc one
RUN sed -i 's/use_embedded_jre=true/use_embedded_jre=false/g' /root/sonar-scanner/bin/sonar-scanner
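For context, the copied properties file is what points the scanner at the server; a minimal sketch of its relevant line (using the URL from the error below):
# sonar-runner.properties
sonar.host.url=https://sonar.***.com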
My Sonar link is not accessible. I did confirm all the network checks, such as its reachability from my Jenkins host, and it is fine. It is only from the SonarQube container that the link is unreachable:
ERROR: SonarQube server [https://sonar.***.com] can not be reached
Below is my Jenkinsfile stage for SonarQube:
stage('SonarQube') {
    agent {
        dockerfile {
            filename 'sonar/Dockerfile'
            args '-u root:root'
        }
    }
    steps {
        withCredentials([string(credentialsId: 'trl-mtr-sonar-login', variable: 'SONAR_LOGIN')]) {
            script {
                sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=. || true'
            }
        }
    }
}
The withCredentials step is used in the snippet above. I want to attach the container to the same network as the host.
While browsing I found the manual commands to do this, as well as the docker.image(...).inside step, but I still cannot consolidate them for use in my SonarQube pipeline:
# Start a container attached to a specific network
docker run --network [network] [container]
# Attach a running container to a network
docker network connect [network] [container]
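For what it's worth, those commands translate into the declarative agent as extra run arguments; a minimal sketch, assuming host networking is acceptable (--network host makes the container share the host's network stack, which is what "network just like host" asks for):
agent {
    dockerfile {
        filename 'sonar/Dockerfile'
        // assumption: the Jenkins host can reach the server, so share its network
        args '--network host -u root:root'
    }
}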
I also created the stage below, but it too seems to be failing:
stage('SonarTests') {
    steps {
        docker.image('sonar/Dockerfile').inside('-v /var/run/docker.sock:/var/run/docker.sock --entrypoint="" --net bridge') {
            sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=. || true'
        }
    }
}
Could someone please assist here?
I need to use the host SSH key inside Docker. For this purpose I have built the image like this:
docker build -t example --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" -f dockerfile-dev .
If I use the Docker command directly it works fine, but if I use it inside the Jenkins pipeline script I get the error below:
Running in Durability level: MAX_SURVIVABILITY
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 92: expecting '}', found 'ssh_prv_key' @ line 92, column 116.
ev:${GIT_COMMIT} "--build-arg ssh_prv_ke
Below is the step I have used in the Jenkins pipeline:
sh "docker build -t ${service_name}-dev:${GIT_COMMIT} --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" -f dockerfile-dev ."
And the Dockerfile is used like below:
ARG ssh_prv_key
# Authorize SSH Host
# Add the keys and set permissions
RUN mkdir -p /root/.ssh
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa
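As an aside, the Groovy compilation error above comes from the unescaped nested double quotes; a minimal sketch of a direct fix is to escape the inner quotes and the dollar sign so the shell, not Groovy, runs the command substitution:
sh "docker build -t ${service_name}-dev:${GIT_COMMIT} --build-arg ssh_prv_key=\"\$(cat ~/.ssh/id_rsa)\" -f dockerfile-dev ."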
I solved a similar issue as follows:
Jenkins pipeline
sh "cp ~/.ssh/id_rsa id_rsa"
sh "docker build -t ${service_name}-dev:${GIT_COMMIT} -f dockerfile-dev ."
sh "rm id_rsa"
Dockerfile
# Some instructions...
ADD id_rsa id_rsa
# Now use the "id_rsa" file inside the image...
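Worth noting: a file added this way stays in an image layer, so anyone who can pull the image can extract the key, even if a later instruction deletes the file (a sketch; substitute the tag you built for <image>):
# export the image and unpack the layer that contains id_rsa
docker save <image> -o image.tar
See the BuildKit --secret note further below for a safer approach.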
I've got Composer packages in our company's private repository on Bitbucket. To access them I need to use credentials stored in Jenkins. Currently the whole build is based on a declarative pipeline and a Dockerfile. To pass credentials to Composer I need those credentials in the build stage so I can pass them to the Dockerfile.
How can I achieve that?
I've tried:
// Jenkinsfile
agent {
    dockerfile {
        label 'mylabel'
        filename '.docker/php/Dockerfile'
        args '-v /net/jenkins-ex-work/workspace:/net/jenkins-ex-work/workspace'
        additionalBuildArgs '--build-arg jenkins_usr=${JENKINS_CREDENTIALS_USR} --build-arg jenkins_credentials=${JENKINS_CREDENTIALS} --build-arg test_arg=test'
    }
}
// Dockerfile
ARG jenkins_usr
ARG jenkins_credentials
ARG test_arg
But the args are empty.
TL;DR
Use Jenkins withCredentials([sshUserPrivateKey()]) and echo the private key into id_rsa in the container.
EDITED: Removed the "run as root" step, as I think this caused issues. Instead, a jenkins user is created inside the Docker container with the same UID as the jenkins user that builds the container (no idea if that matters, but we need a user with a home directory so we can create ~/.ssh/id_rsa).
For those that suffered like me... my solution is below. It is NOT ideal, as:
it risks exposing your private key in the build logs if you are not careful (the snippet below is careful, but it's easy to forget). (Although with that in mind, it appears extracting Jenkins credentials is extremely easy for anyone with naughty intentions?)
So use with caution...
In my (legacy) Git project, a simple PHP app with internal Git-based Composer dependencies, I have:
Dockerfile.build
FROM php:7.4-alpine
# install git, openssh, composer... whatever u need here, then:
# create a jenkins user inside the docker image
ARG UID=1001
RUN adduser -D -g jenkins -s /bin/sh -u $UID jenkins \
&& mkdir -p /home/jenkins/.ssh \
&& touch /home/jenkins/.ssh/id_rsa \
&& chmod 600 /home/jenkins/.ssh/id_rsa \
&& chown -R jenkins:jenkins /home/jenkins/.ssh
USER jenkins
# I think only ONE of the two lines below is needed, not sure.
RUN echo "Host bitbucket.org\n\tStrictHostKeyChecking no\n" >> /home/jenkins/.ssh/config \
&& ssh-keyscan bitbucket.org >> /home/jenkins/.ssh/known_hosts
Then in my Jenkinsfile:
def sshKey = ''
pipeline {
    agent any
    environment {
        userId = sh(script: "id -u ${USER}", returnStdout: true).trim()
    }
    stages {
        stage('Prep') {
            steps {
                script {
                    withCredentials([
                        sshUserPrivateKey(
                            credentialsId: 'bitbucket-key',
                            keyFileVariable: 'keyFile',
                            passphraseVariable: 'passphrase',
                            usernameVariable: 'username'
                        )
                    ]) {
                        sshKey = readFile(keyFile).trim()
                    }
                }
            }
        }
        stage('Build') {
            agent {
                dockerfile {
                    filename 'Dockerfile.build'
                    additionalBuildArgs "--build-arg UID=${userId}"
                }
            }
            steps {
                // Turn off command trace for the next line, as we don't want to log the SSH key
                sh '#!/bin/sh -e\n' + "echo '${sshKey}' > /home/jenkins/.ssh/id_rsa"
                // .. proceed with whatever else, like composer install, etc.
            }
        }
    }
}
To be fair, I think some of the RUN commands in the Docker container aren't even necessary, or could be run from the Jenkinsfile? ¯\_(ツ)_/¯
There was a similar issue, supposedly fixed in PR 327, with pipeline-model-definition-1.3.9.
So start by checking the version of your plugin.
But heed also the Dockerfile warning:
It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc.
Build-time variable values are visible to any user of the image with the docker history command.
Using BuildKit with --secret is a better approach for that.
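For illustration, a minimal sketch of that BuildKit approach (assuming Docker 18.09+; the secret id, repository, and package names are arbitrary):
# Dockerfile
# syntax=docker/dockerfile:1
FROM alpine
RUN apk add --no-cache git openssh-client
# the key is mounted only for this RUN step and never written into a layer
RUN --mount=type=secret,id=ssh_key,target=/root/.ssh/id_rsa \
    ssh-keyscan bitbucket.org >> /etc/ssh/ssh_known_hosts && \
    git clone git@bitbucket.org:yourorg/yourrepo.git
built with:
DOCKER_BUILDKIT=1 docker build --secret id=ssh_key,src=$HOME/.ssh/id_rsa .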
On doing a docker build inside a Jenkinsfile, i.e.:
docker build -f ./Dockerfile -t datastore:1.0.1 .
I am getting an error like:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
This is my Jenkinsfile
#!/usr/bin/groovy
node {
    checkout scm
    // Here are the important data for this build
    def DOCKER_REGISTRY = "XXX"
    def DATASTORE = "datastore"
    def DOCKER_TAG_DATASTORE = "${DOCKER_REGISTRY}/XXX"
    def APP_VERSION = "1.0.1"
    stage('Build') {
        dockerInside('XXX/db-server:1.0.114', '') {
            echo "Setting up artifactory location to push docker image ${DATASTORE}:${APP_VERSION}"
            sh "docker build -f ./Dockerfile -t ${DATASTORE}:${APP_VERSION} ."
            sh "docker tag ${DATASTORE}:${APP_VERSION} ${DOCKER_TAG_DATASTORE}:${APP_VERSION}"
            withCredentials([
                usernamePassword(
                    credentialsId: CORE_IZ_USER,
                    usernameVariable: 'LOG',
                    passwordVariable: 'PAS'
                )]) {
                // Doing some upload commands (see artifactory or docker upload commands from Jenkins)
                sh "docker push ${DOCKER_TAG_DATASTORE}:${APP_VERSION}"
                echo "Push to ${DOCKER_TAG_DATASTORE}:${APP_VERSION}"
            }
        }
    }
    stage('Docker image creation') {
        echo "Docker image creation"
    }
    stage('Docker image upload') {
        echo "Docker image upload"
    }
}
This is my Dockerfile
FROM XXX/rhel:7.5
USER root
RUN yum -y install gcc && yum install -y git && yum install -y docker
# Install Go
RUN curl -O -s https://dl.google.com/go/go1.10.2.linux-amd64.tar.gz
RUN tar -xzf go1.10.2.linux-amd64.tar.gz -C /usr/local
ENV PATH /usr/local/go/bin:$PATH
ENV GOPATH /gopath
ENV GOBIN /usr/local/go/bin
WORKDIR /gopath/src/XXX
RUN mkdir -p /gopath/src/XXX
ADD . /gopath/src/XXX
RUN GOOS=linux GOARCH=amd64 CGO_ENABLED=1 go build -tags netgo -installsuffix netgo -o ./db-server /gopath/src/XXX/datastore/main.go
ADD ./db-server /db-server
ENTRYPOINT ["/db-server"]
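For reference, that daemon error usually means the build container has no access to a Docker daemon. A common remedy is mounting the host's socket into the agent; a sketch using the standard docker.image(...).inside step (whether the custom dockerInside helper forwards run arguments the same way is an assumption):
docker.image('XXX/db-server:1.0.114').inside('-v /var/run/docker.sock:/var/run/docker.sock') {
    // the docker CLI inside the container now talks to the host daemon
    sh "docker build -f ./Dockerfile -t ${DATASTORE}:${APP_VERSION} ."
}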
I'm trying to execute an SSH command from inside a Docker container in a Jenkins pipeline. I'm using the CloudBees Docker Pipeline Plugin to spin up the container and execute commands, and the SSH Agent Plugin to manage my SSH keys. Here's a basic version of my Jenkinsfile:
node {
    step([$class: 'WsCleanup'])
    docker.image('node').inside {
        stage('SSH') {
            sshagent (credentials: [ 'MY_KEY_UUID' ]) {
                sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a"
            }
        }
    }
}
When the SSH command runs, I get this error:
+ ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a
No user exists for uid 1005
I combed through the logs and realized the Docker Pipeline Plugin is automatically telling the container to run with the same user that is logged in on the host by passing a UID as a command line argument:
$ docker run -t -d -u 1005:1005 [...]
I decided to check what users existed in the host and the container by running cat /etc/passwd in each environment. Sure enough, the list of users was different in each. 1005 was the jenkins user on the host machine, but that UID didn't exist in the container. To solve the issue, I mounted /etc/passwd from the host to the container when spinning it up:
node {
    step([$class: 'WsCleanup'])
    docker.image('node').inside('-v /etc/passwd:/etc/passwd') {
        stage('SSH') {
            sshagent (credentials: [ 'MY_KEY_UUID' ]) {
                sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a"
            }
        }
    }
}
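A small hardening note: the mount also works read-only, since the lookup only reads the file (assuming nothing in the build needs to modify /etc/passwd):
docker.image('node').inside('-v /etc/passwd:/etc/passwd:ro') {
    // UID lookups succeed, but the container cannot alter the host's passwd file
}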
The solution provided by @nathan-thompson is awesome, but in my case I was unable to find the user even in the /etc/passwd of the host machine! It means mounting the passwd file did not fix the problem. This question https://superuser.com/questions/580148/users-not-found-in-etc-passwd suggested some users are logged in the host using an identity provider like LDAP.
The solution was finding a way to add the proper line to the passwd file on the container. Calling getent passwd $USER on the host will provide the passwd line for the Jenkins user running the container.
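For example, a sketch of what that yields (the UID 1005 echoing the earlier error is illustrative):
$ getent passwd $USER
jenkins:x:1005:1005:Jenkins:/var/lib/jenkins:/bin/bash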
I added a step running on the node (and not on the Docker agent) to get the line and save it in a file. Then in the next step I mounted the generated passwd file into the container:
stages {
    stage('Create passwd') {
        steps {
            sh """echo \$(getent passwd \$USER) > /tmp/tmp_passwd
            """
        }
    }
    stage('Test') {
        agent {
            docker {
                image '*******'
                args '***** -v /tmp/tmp_passwd:/etc/passwd'
                reuseNode true
                registryUrl '*****'
                registryCredentialsId '*****'
            }
        }
        steps {
            sh """ssh -i ********
            """
        }
    }
}
I just found another solution to this problem that I want to share. It differs from the existing solutions in that it allows running the complete pipeline in one agent, instead of one per stage.
The trick is to refer to a Dockerfile (which may be built FROM the original image) instead of using the image directly, and then add the user in it:
# Dockerfile
FROM node
ARG jenkinsUserId=
ARG jenkinsGroupId=
# remap the jenkins user/group (assumed to exist in the base image) to the host's IDs
RUN if ! id $jenkinsUserId; then \
        usermod -u ${jenkinsUserId} jenkins; \
        groupmod -g ${jenkinsGroupId} jenkins; \
    fi
// Jenkinsfile
pipeline {
    agent {
        dockerfile {
            additionalBuildArgs "--build-arg jenkinsUserId=\$(id -u jenkins) --build-arg jenkinsGroupId=\$(id -g jenkins)"
        }
    }
}
agent {
    docker {
        image 'node:14.10.1-buster-slim'
        args '-u root:root'
    }
}
environment {
    SSH_deploy = credentials('e99988ea-6bdc-45fc-b9e1-536b875bcac7')
}
stage('build') {
    steps {
        sh '''#!/bin/bash
            eval $(ssh-agent -s)
            cat $SSH_deploy | tr -d '\r' | ssh-add -
            touch .env
            echo 'REACT_APP_BASE_API = "//172.22.132.115:8080"' >> .env
            echo 'REACT_APP_ADMIN_PANEL_URL = "//172.22.132.115"' >> .env
            yarn install
            CI=false npm run build
            ssh -t -o StrictHostKeyChecking=no root@172.22.132.115 'rm -rf /usr/local/src/build'
            scp -r -o StrictHostKeyChecking=no build root@172.22.132.115:/usr/local/src/
            ssh -t -o StrictHostKeyChecking=no root@172.22.132.115 'systemctl restart nginx'
        '''
    }
}
From the solution provided by Nathan Thompson, I modified it this way for a Jenkins Docker build container which runs inside a Jenkins Docker slave (Docker in Docker):
if (validated_parameters.custom_gradle_image) {
    docker.image(validated_parameters.custom_gradle_image).inside(' -v /etc/passwd:/etc/passwd -v /var/lib/jenkins/.ssh/:/var/lib/jenkins/.ssh/ ') {
        sshagent(['jenkins-git-io']) {
            sh "${gradleCommand}"
        }
    }
}