I want to start using Jenkins to build my app, test it, and push my images to a local registry.
Because I have two images to push, I would like to use docker-compose, but docker-compose is missing from the container.
I installed Jenkins through Portainer, and I'm using the jenkins/jenkins:lts image.
Is there a way to install docker-compose into the container without having to create my own Dockerfile for it?
My Jenkins pipeline so far is:
node {
    stage('Clone repository') {
        checkout([$class: 'GitSCM',
            branches: [[name: '*/master']],
            extensions: scm.extensions,
            userRemoteConfigs: [[
                url: 'repo-link',
                credentialsId: 'credentials'
            ]]
        ])
    }
    stage('Build image') {
        sh 'cd src/ && docker-compose build'
    }
    stage('Push image') {
        // push from the same directory as the build, so the same docker-compose.yml is used
        sh 'cd src/ && docker-compose push'
    }
}
You can either install docker-compose at image build time, via a Dockerfile:
FROM jenkins/jenkins
USER root
RUN curl -L \
    "https://github.com/docker/compose/releases/download/1.25.3/docker-compose-$(uname -s)-$(uname -m)" \
    -o /usr/local/bin/docker-compose \
    && chmod +x /usr/local/bin/docker-compose
USER jenkins
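If you go this route, building and running the image looks roughly like this (the jenkins-compose tag is just a placeholder; mounting the Docker socket assumes you want the container to drive the host's Docker daemon):

docker build -t jenkins-compose .
docker run -d -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock jenkins-compose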
Or you can install docker-compose after the Jenkins container is already running, with a similar curl command (note that this variant downloads the run.sh wrapper, which runs Compose itself as a container):
$ sudo curl -L https://github.com/docker/compose/releases/download/1.25.3/run.sh -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
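If you want to do this inside the already-running container (for example, one started from Portainer), open a root shell in it first and run the same commands there; a sketch, assuming the container is named jenkins:

docker exec -u root -it jenkins bash
curl -L "https://github.com/docker/compose/releases/download/1.25.3/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

Keep in mind that anything installed this way is lost when the container is recreated, which is why the Dockerfile variant is usually preferable.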
I'm trying to use the sshagent plugin to deploy to a remote server.
When I use the syntax below:
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                sshagent(['nginx-ec2']) {
                    sh "ssh -o StrictHostKeyChecking=no ubuntu@<host_ip>"
                    sh "whoami"
                }
            }
        }
    }
}
I get this output:
[Pipeline] sh
+ whoami
jenkins
while I expect the script to run on the remote server using the provided credentials.
So, I had to run it this way:
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                sshagent(['nginx-ec2']) {
                    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <host_ip> 'whoami && \
                        sudo apt update && sudo apt install -y docker.io && \
                        sudo usermod -aG docker ubuntu && \
                        source .bashrc && \
                        docker run -d nginx'"
                }
            }
        }
    }
}
Is there any "clean" way to run a script on the remote server as ubuntu instead of the jenkins user?
Edit:
I understand the commands need to run inside the ssh command, not as a separate sh step; otherwise they run locally as jenkins. I am able to do it in the scripted style as below.
That's why I'm asking if there's a better way to write it declaratively.
node {
    stage('Deploy') {
        def dockerRun = "whoami && \
            sudo apt update && sudo apt install -y docker.io && \
            sudo usermod -aG docker ubuntu && \
            source .bashrc && \
            docker run -d nginx"
        sshagent(['nginx-ec2']) {
            sh "ssh -o StrictHostKeyChecking=no ubuntu@<host_ip> '${dockerRun}'"
        }
    }
}
Thanks,
As noted, you should select a credential that references the right remote username, as in the SSH Agent Jenkins plugin's own example:
node {
    sshagent(credentials: ['deploy-dev']) {
        sh 'ssh -o StrictHostKeyChecking=no -l cloudbees 192.168.1.106 uname -a'
    }
}
Plus, I would execute only one script containing the whole sequence of commands you want to run remotely, as sketched below.
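For example, here is a declarative sketch that keeps the whole remote sequence in one place and makes a single ssh call (the nginx-ec2 credential, the ubuntu user, and the command list are taken from the question; <host_ip> stays a placeholder):

pipeline {
    agent any
    environment {
        // the whole remote sequence, joined into one script
        REMOTE_SCRIPT = 'whoami && sudo apt update && sudo apt install -y docker.io && sudo usermod -aG docker ubuntu && docker run -d nginx'
    }
    stages {
        stage('Deploy') {
            steps {
                sshagent(['nginx-ec2']) {
                    // single-quoted Groovy string: the shell, not Groovy, expands $REMOTE_SCRIPT
                    sh 'ssh -o StrictHostKeyChecking=no -l ubuntu <host_ip> "$REMOTE_SCRIPT"'
                }
            }
        }
    }
}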
Well, so far this is the best way to do it, in spite of the repetition!
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                sshagent(['nginx-ec2']) {
                    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <remote_ip> 'whoami'"
                    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <remote_ip> 'sudo apt update && sudo apt install -y docker.io'"
                    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <remote_ip> 'sudo usermod -aG docker ubuntu'"
                    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <remote_ip> 'source .bashrc'"
                    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <remote_ip> 'docker run -d nginx'"
                }
            }
        }
    }
}
We have a project jb_common on Bitbucket at bitbucket.org/company/jb_common.
I'm trying to build a container that requires a package from another private repo, bitbucket.org/company/jb_utils.
Dockerfile:
FROM golang
# create a working directory
WORKDIR /app
# add source code
COPY . .
### ADD ssh keys for bitbucket
ARG ssh_prv_key
ARG ssh_pub_key
RUN apt-get update && apt-get install -y ca-certificates git-core ssh
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    echo "StrictHostKeyChecking no" > /root/.ssh/config && ls /root/.ssh/config
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
    echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
    chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa.pub
RUN git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/" && cat /root/.gitconfig
RUN cat /root/.ssh/id_rsa
# ENV persists for the following RUN steps; a plain `RUN export` is lost when the step ends
ENV GOPRIVATE=bitbucket.org/company/
RUN echo "${ssh_prv_key}"
RUN go get bitbucket.org/company/jb_utils
RUN cp -R .env.example .env && ls -la /app
#RUN go mod download
RUN go build -o main .
RUN cp -R /app/main /main
### Delete ssh credentials
RUN rm -rf /root/.ssh/
ENTRYPOINT [ "/main" ]
and this bitbucket-pipelines.yml:
image: python:3.7.4-alpine3.10
pipelines:
  branches:
    master:
      - step:
          services:
            - docker
          caches:
            - pip
          script:
            - echo $SSH_PRV_KEY
            - pip3 install awscli
            - IMAGE="$AWS_IMAGE_PATH/jb_common"
            - TAG=1.0.${BITBUCKET_BUILD_NUMBER}
            - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_IMAGE_PATH
            - aws ecr list-images --repository-name "jb_common" --region $AWS_DEFAULT_REGION
            - docker build -t $IMAGE:$TAG --build-arg ssh_prv_key="$(echo $SSH_PRV_KEY)" --build-arg ssh_pub_key="$(echo $SSH_PUB_KEY)" .
            - docker push $IMAGE:$TAG
In the pipeline I build the image and push it to ECR.
I have already added repository variables on Bitbucket with the SSH private and public keys (screenshot: https://i.stack.imgur.com/URAsV.png).
On my local machine the Docker image builds successfully using the command:
docker build -t jb_common --build-arg ssh_prv_key="$(cat ~/docker_key/id_rsa)" --build-arg ssh_pub_key="$(cat ~/docker_key/id_rsa.pub)" .
(screenshot: https://i.stack.imgur.com/FZuNo.png)
But on Bitbucket I get this error:
go: bitbucket.org/company/jb_utils@v0.1.2: reading https://api.bitbucket.org/2.0/repositories/company/jb_utils?fields=scm: 403 Forbidden
server response: Access denied. You must have write or admin access.
The user these SSH keys belong to has admin access to both private repos.
While debugging, I added a step to bitbucket-pipelines.yml (echo $SSH_PRV_KEY) to verify that the variables are forwarded into the container on Bitbucket; the result: (screenshot: https://i.stack.imgur.com/FjRof.png)
RESOLVED!!!
Pipelines does not currently support line breaks in environment variables, so base64-encode the private key by running:
base64 -w 0 < private_key
and copy the output into the Bitbucket repository variable.
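To sanity-check the round trip locally before touching the pipeline (a small sketch; private_key is your key file):

base64 -w 0 < private_key > key.b64
base64 --decode < key.b64 | diff - private_key && echo "round trip OK"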
I then edited my bitbucket-pipelines.yml to:
image: python:3.7.4-alpine3.10
pipelines:
  branches:
    master:
      - step:
          services:
            - docker
          caches:
            - pip
          script:
            - apk add --update coreutils
            - mkdir -p ~/.ssh
            - (umask 077 ; echo $SSH_PRV_KEY | base64 --decode > ~/.ssh/id_rsa)
            - pip3 install awscli
            - IMAGE="$AWS_IMAGE_PATH/jb_common"
            - TAG=1.0.${BITBUCKET_BUILD_NUMBER}
            - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_IMAGE_PATH
            - aws ecr list-images --repository-name "jb_common" --region $AWS_DEFAULT_REGION
            - docker build -t $IMAGE:$TAG --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" .
            - docker push $IMAGE:$TAG
I am trying to automate a Docker build in a Jenkins pipeline. In my Dockerfile I build a Node application. The npm install pulls in some private Git repositories that need OS bindings and therefore have to be installed inside the container. When I run this manually, I copy my SSH key (id_rsa) into the build context so it can be used during npm install. My problem is that when running this task in a Jenkins pipeline I will be using the ssh-agent Jenkins plugin, and it is not possible to extract the private key from the ssh-agent. How should I pass my ssh-agent to my Dockerfile?
EDIT 1:
I got it partially working:
Docker build command:
DOCKER_BUILDKIT=1 docker build --no-cache -t $DOCKER_REGISTRY_URL/$IMAGE_NAME:v$BUILD_NUMBER --ssh default .
Then in the Dockerfile, this works fine:
RUN --mount=type=ssh GIT_SSH_COMMAND="ssh -vvvT -o StrictHostKeyChecking=no" git clone git@github.com:****
The weird thing is that this doesn't:
RUN --mount=type=ssh GIT_SSH_COMMAND="ssh -vvvT -o StrictHostKeyChecking=no" npm install git+ssh://git@github.com:****
I feel this has something to do with StrictHostKeyChecking=no.
I finally got it working by using the root user in the Dockerfile and pointing the npm cache at root's home.
The problem was that git was using the /root/.ssh folder while npm was using a different path, /home/.ssh, because its cache was set to /home/.ssh.
For anyone still struggling, this is the config I used.
Docker Build Command:
DOCKER_BUILDKIT=1 docker build --no-cache -t test --ssh default .
Dockerfile:
USER root
RUN apt-get update && \
    apt-get install -y \
        git \
        openssh-server \
        openssh-client
RUN mkdir -p -m 600 /root/.ssh && ssh-keyscan github.com >> /root/.ssh/known_hosts && echo "Host *\n StrictHostKeyChecking no" > /root/.ssh/config
RUN echo "Check ssh_config" && cat /root/.ssh/config
RUN rm -rf node_modules
RUN npm config set cache /root
RUN --mount=type=ssh GIT_SSH_COMMAND="ssh -vvvT" npm install
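Tying this back to the original question about getting the Jenkins ssh-agent into the build: the sshagent step exports SSH_AUTH_SOCK for the steps it wraps, and BuildKit's --ssh default forwards exactly that socket to RUN --mount=type=ssh steps, so the private key never leaves the agent. A sketch ('git-deploy-key' is a placeholder credential ID):

pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                // sshagent starts an ssh-agent with the credential loaded
                // and sets SSH_AUTH_SOCK for the wrapped steps
                sshagent(['git-deploy-key']) {
                    // --ssh default hands that socket to RUN --mount=type=ssh
                    sh 'DOCKER_BUILDKIT=1 docker build --ssh default -t test .'
                }
            }
        }
    }
}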
We have several Jenkins agents running as physical machines.
So far I have run my Jenkins pipeline on the agents themselves; I am now trying to move the build and test execution into Docker containers using the Jenkins Docker plugin.
Below you can find a simplified version of our Jenkinsfile, which uses Gradle to build, test, and package a Java Spring Boot application.
node {
    stage('Preparation') {
        cleanWs()
        checkout scm
        notifyBitbucket()
    }
}

pipeline {
    agent {
        docker {
            image "our-custom-registry.com/jenkins-build:latest"
            registryUrl 'https://our-custom-registry.com'
            registryCredentialsId '...'
            alwaysPull true
            args "-u jenkins -v /var/run/docker.sock:/var/run/docker.sock" // the pipeline itself requires docker
        }
    }
    stages {
        stage('Build') {
            steps {
                sh './gradlew assemble classes testClasses'
            }
        }
        stage('Test') {
            parallel {
                stage('Unit Tests') {
                    when { expression { return build_params.ENABLE_UNITTEST } }
                    steps {
                        sh './gradlew test'
                        junit UNIT_TEST_RESULT_DIR
                    }
                }
                stage('Integration Tests') {
                    when { expression { return build_params.ENABLE_INTEGRATION } }
                    steps {
                        sh './gradlew integration'
                        junit INTEGRATION_TEST_RESULT_DIR
                    }
                }
            }
        }
        stage('Finalize') {
            stages {
                stage('Docker Push') {
                    when { expression { return build_params.ENABLE_DOCKER_PUSH } }
                    steps {
                        sh './gradlew pushDockerImage'
                    }
                }
            }
        }
    }
    post {
        cleanup {
            cleanWs()
        }
        always {
            script {
                node {
                    currentBuild.result = currentBuild.result ?: 'SUCCESS'
                    notifyBitbucket()
                }
            }
        }
    }
}
Below is the Dockerfile I use for the build image. As you can see, I manually create a jenkins user and add it to the docker groups (unfortunately the GID is 998 or 999 depending on the Jenkins agent).
FROM openjdk:8-jdk-stretch
USER root
# prerequisites:
# - a user and a group jenkins with UID/GID=1001 exist
# - the user home is /var/jenkins
# - the user is in the docker group
# - on some agents docker has the gid 998, on some it is 999
RUN apt-get update \
    && apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common rsync tree \
    && curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - \
    && add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/debian \
        $(lsb_release -cs) \
        stable" \
    && apt-get update \
    && apt-get -y install docker-ce docker-ce-cli containerd.io \
    && groupadd -g 1001 jenkins \
    && groupadd -f -g 998 docker1 \
    && groupadd -f -g 999 docker2 \
    && useradd -d "/var/jenkins" -u 1001 -g 1001 -m -s /bin/bash jenkins \
    && usermod -a -G 998 jenkins \
    && usermod -a -G 999 jenkins
USER jenkins
Jenkins then executes the following command:
docker run -t -d -u 1001:1001 -u jenkins -v /var/run/docker.sock:/var/run/docker.sock -w /var/jenkins/workspace/JOB_NAME -v /var/jenkins/workspace/JOB_NAME:/var/jenkins/workspace/JOB_NAME:rw,z -v /var/jenkins/workspace/JOB_NAME@tmp:/var/jenkins/workspace/JOB_NAME@tmp:rw,z -e ******** ... our-custom-registry.com/base/jenkins-build:latest cat
This pipeline works just fine... sometimes!
Most of the time, however, some files mysteriously go missing.
For example, my build.gradle is assembled from multiple other files that it includes.
At some point during the build, one of these files appears to be missing.
+ ./gradlew pushDockerImage
FAILURE: Build failed with an exception.
* Where:
Build file '/var/****/workspace/JOB_NAME/build.gradle' line: 35
* What went wrong:
A problem occurred evaluating root project 'foo'.
> Could not read script '/var/****/workspace/JOB_NAME/build/gradle/scripts/springboot-plugin.gradle' as it does not exist.
It is always a different file that goes missing.
I started running tree just before the ./gradlew call to make sure the file is not actually being removed.
Does anybody have an idea what might be going on here?
Update
Forget everything I said about Docker; this is a pure Gradle and Jenkins problem.
When I replace the Docker agent with a plain Jenkins agent, the same problems occur.
The problem seems to be that you cannot run multiple Gradle invocations in parallel on the same directory; one way around that is sketched below.
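Under that assumption, a way out is to let a single Gradle invocation own the directory and schedule the tasks itself, instead of launching two Gradle processes against the same workspace. A sketch of the Test stage (UNIT_TEST_RESULT_DIR and INTEGRATION_TEST_RESULT_DIR as in the original Jenkinsfile):

stage('Test') {
    steps {
        // one Gradle process owns the workspace; add --parallel only if
        // the projects are decoupled
        sh './gradlew test integration'
    }
    post {
        always {
            junit UNIT_TEST_RESULT_DIR
            junit INTEGRATION_TEST_RESULT_DIR
        }
    }
}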
I've created a Docker image to be able to run Node >= 7.9.0 and MongoDB for testing in Jenkins. Some might argue that testing against MongoDB is not the correct approach, but the app uses it extensively and I have some complex updates and deletes, so I need it there.
The Dockerfile is under dockerfiles/test/Dockerfile in my GitHub repo. With the pipeline syntax below, the Docker image builds successfully, but I can't run sh 'npm install' or sh 'npm -v' in the pipeline steps. The image is tested: if I build it locally and run it, I can do the npm install there. sh 'node -v' and sh 'ls' run successfully in the pipeline.
Here is the pipeline syntax.
pipeline {
    agent { dockerfile { dir 'dockerfiles/test' } }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
    }
    post {
        always {
            echo 'I will always say Hello again!'
        }
    }
}
I get this error: ERROR: script returned exit code -1. I can't see anything wrong here. I've also tested with other Node images, with the same result. If I run it on a Node slave I can do the installation, but I do not want to maintain many different slaves with lots of setup for integration tests.
And here is the Dockerfile:
FROM ubuntu:16.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb http://repo.mongodb.org/apt/ubuntu $(cat /etc/lsb-release | grep DISTRIB_CODENAME | cut -d= -f2)/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list
RUN apt-get update && apt-get install -y \
        curl && \
    curl -sL https://deb.nodesource.com/setup_7.x | bash - && \
    apt-get install -y nodejs && \
    apt-get install -y mongodb-org
RUN mkdir -p /data/db
# ENV persists across layers; a plain `RUN export` is lost when the step ends
ENV LC_ALL C
RUN groupadd -g 1000 jenkins && useradd -u 1000 jenkins -g jenkins
EXPOSE 27017
CMD ["/usr/bin/mongod"]
Found a workaround to a similar problem.
Problem:
Jenkins running a pipeline job.
The job runs commands inside a Debian slim container.
All commands fail instantly with no error output, only ERROR: script returned exit code -1.
Running the container outside Jenkins and executing the same commands as the same user works as it should.
Extract from the Jenkinsfile:
androidImage = docker.build("android")
androidImage.inside('-u root') {
    stage('Install') {
        sh 'npm install' // was failing with a generic error and no output
    }
}
Solution
Found the answer in the Jenkins bug tracker (https://issues.jenkins-ci.org/browse/JENKINS-35370) and in "Jenkins Docker Pipeline Exit Code -1".
My problem was solved by installing the procps package in my Debian Dockerfile:
apt-get install -y procps
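In Dockerfile form that is one extra line (a sketch for a Debian-based image). As the linked issue explains, Jenkins uses ps inside the container to track the processes it launches, which is why a ps-less slim image makes every sh step fail with exit code -1:

RUN apt-get update && apt-get install -y procps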
I replicated your setup as faithfully as I could. I used your Dockerfile and Jenkinsfile, and here's my package.json:
{
  "name": "minimal",
  "description": "Minimal package.json",
  "version": "0.0.1",
  "devDependencies": {
    "mocha": "*"
  }
}
It failed like this for me during npm install:
npm ERR! Error: EACCES: permission denied, mkdir '/home/jenkins'
I updated one line in your Dockerfile to add --create-home:
RUN groupadd -g 1000 jenkins && useradd -u 1000 jenkins -g jenkins --create-home
And the build passed. Kudos to @mkobit for keying in on the issue and linking to the Jenkins issue that will make this cleaner in the future.