I'm trying to automate deployments using the official ArgoCD docker image (https://hub.docker.com/r/argoproj/argocd/dockerfile)
I've created a declarative Jenkins pipeline that uses the Kubernetes plugin for its agents and defines the pod in YAML. The container definitions look like this:
pipeline {
  agent {
    kubernetes {
      yaml """
kind: Pod
metadata:
  name: agent
spec:
  containers:
  - name: maven
    image: maven:slim
    command:
    - cat
    tty: true
    volumeMounts:
    - name: jenkins-maven-cache
      mountPath: /root/.m2/repository
  - name: argocd
    image: argoproj/argocd:latest
    command:
    - cat
    tty: true
...
I'm trying to run commands inside that container; that step in the pipeline looks like this:
stage('Build') {
  steps {
    container('maven') {
      sh 'echo testing' // this works just fine
    }
  }
}
stage('Deploy') {
  steps {
    container('argocd') {
      sh "echo testing" // this does not work
      // more deploy scripts here, once sh works
    }
  }
}
So I have two containers: one where the sh step works just fine and another where it doesn't. An sh step in the "argocd" container just hangs for 5 minutes, then Jenkins kills it; the exit message is:
process apparently never started in /home/jenkins/agent/workspace/job-name#tmp/durable-46cefcae (running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
I can't even echo a simple string in this particular container. It works fine in other containers, like the official Maven image from Docker Hub that I use to build the Spring Boot application. I can also run commands in the argocd container manually from the command line with docker exec, but Jenkins just won't run them in the pipeline for some reason.
What could it be?
I am running the latest version (1.33) of the durable task plugin.
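For context on that error message: the durable-task plugin writes the sh step's script into a durable-* temp directory in the workspace and launches a wrapper that records the script's output and exit code into control files, which the agent then polls. The sketch below is a simplified illustration of that mechanism (the control-file names mirror the real plugin's; everything else is illustrative, not the plugin's actual code):

```shell
#!/bin/sh
# Simplified sketch of what the durable-task plugin does for every sh step.
WS=$(mktemp -d)                          # stands in for workspace@tmp/durable-xxxx
printf '%s\n' 'echo testing' > "$WS/script.sh"

# The plugin launches a wrapper that records output and exit code to files:
sh -c "sh '$WS/script.sh' > '$WS/jenkins-log.txt' 2>&1; echo \$? > '$WS/jenkins-result.txt'"

# The agent then polls for these files. If the container cannot even run sh,
# they never appear, and after 5 minutes Jenkins reports
# "process apparently never started".
cat "$WS/jenkins-log.txt"                # prints: testing
cat "$WS/jenkins-result.txt"             # prints: 0
```

If the image has no usable shell, the wrapper itself never launches, neither file is created, and the step times out with exactly the message above.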
Update:
Turns out that the image for Argo CD (the continuous deployment tool), argoproj/argocd:latest, does not include any commands other than argocd, so the issue was with the container image I tried to use and not with Jenkins itself. My solution was to install the Argo CD CLI into a custom Docker image and use that instead of the official one.
I've just encountered a similar issue with a custom Docker image I created myself.
It turns out I was using USER nobody in that image's Dockerfile, and somehow this prevented the Jenkins agent pod from running cat or any other shell command from my pipeline script. Running the specific container as the root user worked for me.
So in your case I would add a securityContext with runAsUser: 0, like below.
...
- name: argocd
  image: argoproj/argocd:latest
  command:
  - cat
  tty: true
  securityContext:
    runAsUser: 0
...
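To see why a restrictive user breaks the sh step: the agent must write its wrapper script and control files into the shared workspace volume, so a container user without write access there means the step can never start. A small sketch of that failure mode (run it as a non-root user; root ignores permission bits):

```shell
#!/bin/sh
# Illustrates the failure mode behind USER nobody: if the current user
# cannot write into the workspace directory, the sh step's wrapper script
# can never be created, and the step hangs until Jenkins times it out.
WS=$(mktemp -d)
chmod 555 "$WS"                          # workspace not writable for this user

if sh -c "echo hi > '$WS/out.txt'" 2>/dev/null; then
  echo "workspace writable: sh steps can start"
else
  echo "workspace not writable: the sh step would hang and time out"
fi
```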
Kubernetes reference: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
If the issue is Jenkins-related, here are some things that may help to solve the problem:
Issues with the working directory: if you updated Jenkins from an older version, the workdir used to be /home/jenkins, while in recent versions it should be /home/jenkins/agent;
or, if you are running on Windows, the path should start with C:\dir and not with /dir.
You can try a fresh, clean install with apt-get --purge remove jenkins and then apt-get install jenkins.
This is not your case, as you run the latest version of the durable task plugin, but for other readers' reference: versions prior to 1.28-1.30 caused the same issue.
If your Jenkins install is clean, the issue should be investigated differently: it seems that either no exit code is being returned to the sh step and/or the script is executed in a different shell.
I would try creating a shell script file in the working directory of the container:
#!/bin/bash
echo "testing"
echo $?
and try to run it with source my_script.sh
or with bash my_script.sh
$? is the exit code of the last command; having it printed will make sure that your script terminates correctly. The source command runs the script in the same shell that calls it, so shell variables remain accessible, whereas the bash command runs it in a separate subshell.
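A quick way to see the difference the answer describes (using the POSIX spellings: `.` for source and sh in place of bash, which behave the same way for this purpose):

```shell
#!/bin/sh
# Demonstrates that sourcing runs the script in the current shell
# (variables survive), while running it as a command uses a subshell.
cat > /tmp/my_script.sh <<'EOF'
MY_VAR="set by script"
echo "testing"
echo $?
EOF

. /tmp/my_script.sh
echo "after source: ${MY_VAR:-unset}"     # MY_VAR is visible here

MY_VAR=""
sh /tmp/my_script.sh
echo "after subshell: ${MY_VAR:-unset}"   # MY_VAR did not survive
```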
Related
I've created a super simple Docker image. When I use that image in GitLab through a .gitlab-ci.yml file, the GitLab "script:" part never gets executed. It's always:
Executing "step_script" stage of the job script
Cleaning up project directory and file based variables
If I add a "report:" entry to my yml, the last line becomes "Uploading artifacts for failed job".
It seems as if the bash inside the Docker image is somehow broken, but I don't see how, since I can use docker run MyImage <command> to successfully run bash commands.
Also, GitLab lets the pipeline run indefinitely after the last line, never ending it. I never experienced this with other Docker images.
Do I have to modify some permissions, or something? I can run e.g. the official Gradle Docker image, but not mine. Does anyone have an idea why?
My simple .gitlab-ci.yml:
image:
  name: <... My Image ...>

stages:
  - build

build-stage:
  stage: build
  script:
    - echo "Testing echo"
My simple Dockerfile:
FROM ubuntu:20.10
CMD ["bash"]
The problem was that my host system is Apple Silicon, while the target GitLab server runs on AMD64, so I was creating linux/arm64 images instead of linux/amd64 ones. Setting the platform explicitly, like
docker build --platform linux/amd64 -t MyImageName .
fixed it.
A big problem is that GitLab just fails, without any notification in the log output like "wrong architecture".
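If you are unsure which platform a host builds for by default, a small check like the following helps; the mapping from uname -m to Docker platform names is the standard one. (For an already-built image, `docker image inspect --format '{{.Architecture}}' <image>` shows what it actually contains.)

```shell
#!/bin/sh
# Maps the build host's architecture (uname -m) onto the value you would
# pass to `docker build --platform`, to spot a host/server mismatch.
case "$(uname -m)" in
  x86_64)        echo "host builds linux/amd64 images by default" ;;
  aarch64|arm64) echo "host builds linux/arm64 images by default; pass --platform linux/amd64 for AMD64 servers" ;;
  *)             echo "unrecognized architecture: $(uname -m)" ;;
esac
```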
I'm a total newbie when it comes to CI/CD, so I'm asking your pardon in advance for not using the right terms/doing stupid things.
I have a docker-compose file which I can use to start my application with sudo docker-compose up -d. Works fine locally, but I also have a remote Virtual Machine, which I want to use to test my application.
I want to run some tests (I will implement them later on) and deploy the app on every push to my repository, if everything is OK. I looked into the docs, installed gitlab-runner, and tried this .gitlab-ci.yml file:
build-job:
  stage: build
  script:
    - echo "Hello, $GITLAB_USER_LOGIN!"

test-job1:
  stage: test
  script:
    - echo "This job tests something"

deploy-prod:
  stage: deploy
  script:
    - echo "Restarting containers.."
    - cd /path/too/app/repo
    - git pull
    - sudo docker-compose down && sudo docker-compose up -d
However, when I tried the Docker runner, it gave me an error saying it could not find the path to the directory. I understand this is caused by the fact that it runs every stage in a separate container. How can I restart my application's containers (preferably with compose) on the VM? Is there a better approach to achieve what I want to do?
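For reference, a common pattern for this situation is to let the CI job SSH into the VM and restart compose there, rather than running compose inside the job's container. A sketch, with DEPLOY_HOST and APP_DIR as hypothetical placeholders and a dry-run guard so nothing executes by accident:

```shell
#!/bin/sh
# Hypothetical deploy step: the CI job connects to the VM over SSH and
# restarts compose there. DEPLOY_HOST and APP_DIR are placeholders you
# would provide as CI/CD variables.
DEPLOY_HOST="${DEPLOY_HOST:-deploy@my-vm}"
APP_DIR="${APP_DIR:-/path/to/app/repo}"
DEPLOY_CMD="cd $APP_DIR && git pull && docker-compose down && docker-compose up -d"

if [ "${DO_DEPLOY:-no}" = "yes" ]; then
  ssh "$DEPLOY_HOST" "$DEPLOY_CMD"
else
  echo "dry run; would execute on $DEPLOY_HOST: $DEPLOY_CMD"
fi
```

This sidesteps the "path not found" error, because the compose files live on the VM rather than in the job's container; you would store an SSH key for the VM as a CI/CD variable.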
I am trying to set up a new build pipeline for one of our projects. In a first step I am building a new Docker image for successive testing. This step works fine. However, when the test jobs are executed, the image is pulled, but the commands run on the host instead of in the container.
Here's the contents of my gitlab-ci.yml:
stages:
  - build
  - analytics

variables:
  TEST_IMAGE_NAME: 'registry.server.de/testimage'

build_testing_container:
  stage: build
  image: docker:stable
  services:
    - dind
  script:
    - docker build --target=testing -t $TEST_IMAGE_NAME .
    - docker push $TEST_IMAGE_NAME

mess_detection:
  stage: analytics
  image: $TEST_IMAGE_NAME
  script:
    - vendor/bin/phpmd app html tests/md.xml --reportfile mess_detection.html --suffixes php
  artifacts:
    name: "${CI_JOB_NAME}_${CI_COMMIT_REF_NAME}"
    paths:
      - mess_detection.html
    expire_in: 1 week
    when: always
  except:
    - production
  allow_failure: true
What do I need to change to make gitlab runner execute the script commands inside the container it's successfully pulling?
UPDATE:
It's getting even more interesting:
I just changed the script to sleep for a while so I can attach to the container. When I run pwd from the CI script, it says /builds/namespace/project.
However, running pwd on the server with docker exec in the exact same container returns /app, as it is supposed to.
UPDATE2:
After some more research, I learned that GitLab executes four sub-steps for each build step:
Prepare : Create and start the services.
Pre-build : Clone, restore cache and download artifacts from previous stages. This is run on a special Docker Image.
Build : User build. This is run on the user-provided docker image.
Post-build : Create cache, upload artifacts to GitLab. This is run on a special Docker Image.
It seems like in my case, step 3 isn't executed properly and the command still runs inside the gitlab-runner Docker image.
UPDATE3
In the meantime I tested executing the mess_detection step on a separate machine using the command gitlab-runner exec docker mess_detection. The behaviour is exactly the same, so it's not GitLab-specific; it has to be some configuration option in either the deployment script or the runner config.
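A quick way to confirm which environment the script section really runs in is to add a few debug commands to the job and compare the results against the image that was pushed:

```shell
# Debug commands worth dropping into the job's script: section to confirm
# which image the step is actually executing in.
cat /etc/os-release 2>/dev/null || uname -a   # distro/kernel of the executing environment
pwd                                           # /builds/... (runner checkout) vs. /app (image workdir)
command -v php || echo "php not on PATH"      # a tool that should only exist in TEST_IMAGE_NAME
```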
This is the usual behavior: the image keyword names the Docker image the Docker executor runs to perform the CI tasks.
You can use the services keyword, which defines another Docker image that runs during your job and is linked to the image defined by the image keyword. This allows you to access the service image during build time.
Access can be done via a script or an entrypoint. For example:
In the Dockerfile of the image you are going to build, add a script you want to execute, like this:
ADD exemple.sh /
RUN chmod +x exemple.sh
Then you can add the image as a service in gitlab-ci, and the script line changes to:
docker exec <container_name> /exemple.sh
This runs the script inside the container. Alternatively, specify an entrypoint for the Docker image, and then the script would be:
docker exec <container> /bin/sh -c "cmd1;cmd2;...;cmdn"
here's a reference :
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html
I have created a Dockerfile (for a Node JNLP slave which can be used with the Kubernetes plugin of Jenkins). I am extending from the official image jenkinsci/jnlp-slave:
FROM jenkinsci/jnlp-slave
USER root
MAINTAINER Aryak Sengupta <aryak.sengupta@hyland.com>
LABEL Description="Image for NodeJS slave"
COPY cert.crt /usr/local/share/ca-certificates
RUN update-ca-certificates
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash \
&& apt-get install -y nodejs
ENTRYPOINT ["jenkins-slave"]
I have this image configured inside my Pod template (in the K8s plugin configuration). Now, when I try to run a build on this slave, I find that two containers get spawned inside the Pod (a screenshot to prove the same).
My Pod template looks like this:
And my Kubernetes configuration looks like this:
Now if I do a simple docker ps, I find that two containers started up (why?):
Now, inside the Jenkins job configuration, whatever I add to the build step gets executed in the first container.
Even if I use the official Node container inside my PodTemplate, the result is still the same:
I have tried to print the Node version inside my Jenkins job, and the output is "Node not found". Also, to verify my hunch, I did a docker exec into my second container and tried to print the Node version. In that case, it works absolutely fine.
This is what my build step looks like:
So, to boil it down, I have two major questions:
Why do two separate containers (one for JNLP and one with all my custom changes) start up whenever I fire up the Jenkins job?
Why is my job running on the first container where Node isn't installed? How do I achieve the desired behaviour of building my project with Node using this configuration?
What am I missing?
P.S. - Please do let me know if the question turns out to be unclear in some parts.
Edit: I understand that this can be done using the Jenkins Pipeline plugin, where I can explicitly mention the container name, but I need to do this from the Jenkins UI. Is there any way to specify the container name along with the slave name, which I am already doing like this:
The Jenkins kubernetes plugin will always create a JNLP slave container inside the pod that is created to perform the build. The podTemplate is where you define the other containers you need in order to perform your build.
In this case it seems you would want to add a Node container to your podTemplate. In your build you would then have the build happen inside the named Node container.
You shouldn't really care where the Pod runs. All you need to do is make sure you add a container that has the resources you need (like Node in this case). You can add as many containers as you want to a podTemplate. I have some with 10 or more containers for steps like PMD, Maven, curl, etc.
I use a Jenkinsfile with pipelines.
podTemplate(cloud: 'k8s-houston', label: 'api-hire-build',
  containers: [
    containerTemplate(name: 'maven', image: 'maven:3-jdk-8-alpine', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'pmd', image: 'stash.company.com:8443/pmd:pmd-bin-5.5.4', alwaysPullImage: false, ttyEnabled: true, command: 'cat')
  ],
  volumes: [
    persistentVolumeClaim(claimName: 'jenkins-pv-claim', mountPath: '/mvn/.m2nrepo')
  ]
)
{
  node('api-hire-build') {
    stage('Maven compile') {
      container('maven') {
        sh "mvn -Dmaven.repo.local=/mvn/.m2nrepo/repository clean compile"
      }
    }
    stage('PMD SCA (docker)') {
      container('pmd') {
        sh 'run.sh pmd -d "$PWD"/src -f xml -reportfile "$PWD"/target/pmd.xml -failOnViolation false -rulesets java-basic,java-design,java-unusedcode -language java'
        sh 'run.sh pmd -d "$PWD"/src -f html -reportfile "$PWD"/target/pmdreport.html -failOnViolation false -rulesets java-basic,java-design,java-unusedcode -language java'
        sh 'run.sh cpd --files "$PWD"/src --minimum-tokens 100 --failOnViolation false --language java --format xml > "$PWD"/target/duplicate-code.xml'
      }
      archive 'target/duplicate-code.xml'
      step([$class: 'PmdPublisher', pattern: 'target/pmd.xml'])
    }
  }
}
Alright, so I've figured out the solution. mhang li's answer was the clue, but he didn't explain it at all.
Basically, you need to take the official Jenkins slave image found here and modify it to include the changes for your slave as well. Essentially, you are combining the JNLP and slave containers into one and building a combined image.
The modification will look like this (picking up from the linked Dockerfile):
FROM jenkins/slave:3.27-1
MAINTAINER Oleg Nenashev <o.v.nenashev@gmail.com>
LABEL Description="This is a base image, which allows connecting Jenkins agents via JNLP protocols" Vendor="Jenkins project" Version="3.27"

COPY jenkins-slave /usr/local/bin/jenkins-slave

**INCLUDE CODE FOR YOUR SLAVE. Eg install node, java, whatever**

# Make sure you include the jenkins-slave file as well
ENTRYPOINT ["jenkins-slave"]
Now, name the slave container jnlp (reason: this bug). You will then have a single container spawning, which is your combined JNLP + slave. All in all, your Kubernetes plugin Pod Template will look something like this. Notice the custom URL of the Docker image I have put in. Also, make sure you don't include a Command To Run unless you need one.
Done! Your builds should now run within this container and should function exactly like you programmed the Dockerfile!
Set Container Template -> Name to jnlp.
https://issues.jenkins-ci.org/browse/JENKINS-40847
I am trying to set up a testing framework for my kubernetes cluster using jenkins and the jenkins kubernetes plugin.
I can get jenkins to provision pods and run basic unit tests, but what is less clear is how I can run tests that involve coordination between multiple pods.
Essentially I want to do something like this:
podTemplate(label: 'pod1', containers: [containerTemplate(...)]) {
  node('pod1') {
    container('container1') {
      // start service 1
    }
  }
}
podTemplate(label: 'pod2', containers: [containerTemplate(...)]) {
  node('pod2') {
    container('container2') {
      // start service 2
    }
  }
}
stage('Run test') {
  node {
    sh 'run something that causes service 1 to query service 2'
  }
}
I have two main problems:
Pod lifecycle:
As soon as the block after the podTemplate exits, the pods are terminated. Is there an accepted way to keep the pods alive until a specified condition has been met?
ContainerTemplate from docker image:
I am using a Docker image to provision the containers inside each Kubernetes pod; however, the files that should be inside those images do not seem to be visible/accessible inside the container blocks, even though the environments and installed dependencies are correct for the repo. How do I actually get the service defined in the Docker image to run in a Jenkins-provisioned pod?
It has been some time since I asked this question, and in the meantime I have learned some things that let me accomplish what I was asking about, though maybe not as neatly as I would have liked.
The solution to multi-service tests ended up being simply using a pod template that has the Google Cloud library, and assigning that worker a service-account credential plus a secret key so that it can run kubectl commands on the cluster.
Dockerfile for worker, replace "X"s with desired version:
FROM google/cloud-sdk:alpine

# Install some utility functions.
RUN apk add --no-cache \
    git \
    curl \
    bash \
    openssl

# Used to install a custom version of kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/vX.XX.X/bin/linux/amd64/kubectl &&\
    chmod +x ./kubectl &&\
    mv ./kubectl /usr/local/bin/kubectl

# Helm to manage deployments.
RUN curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh &&\
    chmod 700 get_helm.sh && ./get_helm.sh --version vX.XX.X
Then in the groovy pipeline:
pipeline {
  agent {
    kubernetes {
      label 'kubectl_helm'
      defaultContainer 'jnlp'
      serviceAccount 'helm'
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: gcloud
    image: your-docker-repo-here
    command:
    - cat
    tty: true
"""
    }
  }
  environment {
    GOOGLE_APPLICATION_CREDENTIALS = credentials('google-creds')
  }
  stages {
    stage('Do something') {
      steps {
        container('gcloud') {
          sh 'kubectl apply -f somefile.yaml'
          sh 'helm install something somerepo/somechart'
        }
      }
    }
  }
}
Now that I can access both the helm and kubectl commands, I can bring pods or services up and down at will. It still doesn't solve the problem of using their internal "context" to access files, but at least it gives me a way to run integration tests.
NOTE: For this to work properly, you will need a Kubernetes service account matching the serviceAccount name you use, with its credentials stored in the Jenkins credentials store. For the helm commands to work, make sure Tiller is installed on your Kubernetes cluster. Also, do not change the name of the GOOGLE_APPLICATION_CREDENTIALS env key, as the gsutil tools look for that environment variable.