I am trying to keep the slave pod always running. Unfortunately, when using the Kubernetes agent inside the pipeline, I am still struggling with setting "podRetention" to "always".
For a declarative pipeline you would use idleMinutes to keep the pod alive longer:
pipeline {
    agent {
        kubernetes {
            label "myPod"
            defaultContainer 'docker'
            yaml readTrusted('kubeSpec.yaml')
            idleMinutes 30
        }
    }
    // stages { ... }
}
The idea is to keep the pod alive for a certain time for jobs that are triggered often, for instance the one watching the master branch. That way, if developers are pushing to master heavily, the builds will be fast. When the devs are done we don't need the pod to stay up forever, and we don't want to pay for extra resources for nothing, so we let the pod terminate itself.
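If the goal really is to keep the pod around regardless of idle time (as the "podRetention" wording in the question suggests), the plugin also exposes a podRetention setting on the scripted podTemplate step. A minimal sketch, assuming a recent kubernetes-plugin version and a hypothetical label 'buildpod'; note that pods retained with always() stay behind and must be deleted manually:
// Scripted pipeline sketch: retain the pod even after the build finishes
podTemplate(label: 'buildpod', podRetention: always(), containers: [
    containerTemplate(name: 'docker', image: 'docker:latest', ttyEnabled: true, command: 'cat')
]) {
    node('buildpod') {
        container('docker') {
            sh 'docker version'
        }
    }
}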
I have a pod template declared in the Configure Clouds section and I am using the jenkins/inbound-agent:4.3-4 image. The build agents come up fine, but they come up with just one executor. Is there a way I can increase that number?
The reason I would like to increase the number of executors is that I want to create a job which triggers other jobs sequentially, and I want all the downstream projects to run on the same agent as the main job.
I don't see any option for this in the Configure Clouds section; any help or hint on workarounds is appreciated.
I ran into the same issue in the same situation.
I found a post describing a similar situation; maybe the Kubernetes plugin behaves the same way as the amazon-ecs plugin, in that both hard-code the executor count to 1.
Jenkins inbound-agent container via ecs/fargate plugin - set # executors for node
So, running the pipeline steps one by one is the only way I know to avoid this issue.
If you need to call another job, setting wait: false will work, like this:
build(job: "xxxx", wait: false, parameters: [string(name: "key", value: "value")])
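For the original goal of one job kicking off several downstream jobs, a minimal sketch of "running the steps one by one" with fire-and-forget triggers; the label 'k8s-agent' and the job names job-a and job-b are hypothetical:
pipeline {
    agent { label 'k8s-agent' }   // hypothetical agent label
    stages {
        stage('Trigger downstream') {
            steps {
                // wait: false returns immediately, so the single executor is not blocked
                build(job: 'job-a', wait: false, parameters: [string(name: 'key', value: 'value')])
                build(job: 'job-b', wait: false)
            }
        }
    }
}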
I switched from DeploymentConfigs to Deployments in OpenShift 4.
In my Jenkins pipelines I had a step for rolling out the DeploymentConfig which looked like this:
openshift.withCluster() {
    openshift.withProject("project") {
        // Get a rollout manager for the DeploymentConfig
        def rm = openshift.selector("dc", app).rollout()
        // Wait up to 5 minutes for the DC's pods to reach the Running phase
        timeout(5) {
            openshift.selector("dc", app).related('pods').untilEach(1) {
                return (it.object().status.phase == "Running")
            }
        }
    }
}
In the openshift-jenkins-plugin there doesn't seem to be an option to roll out a Deployment. As far as I know, Deployment is a native Kubernetes object, as opposed to the OpenShift-specific DeploymentConfig.
What would be an easy way to roll out a Deployment in OpenShift 4 from Jenkins?
You can use kubectl with Jenkins:
kubectl rollout undo deployment/abc
You can use kubectl from the pipeline like this:
node {
    stage('Apply Kubernetes files') {
        withKubeConfig([credentialsId: 'user1', serverUrl: 'https://api.k8s.my-company.com']) {
            sh 'kubectl apply -f my-kubernetes-directory'
        }
    }
}
You can roll out the deployment as needed from the command line, and you can also parameterize the commands with variables.
https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_rollout/
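For the actual rollout, the same withKubeConfig wrapper can run the rollout subcommands. A minimal sketch, reusing the credentialsId and serverUrl from above; the deployment name my-app is hypothetical:
node {
    stage('Restart deployment') {
        withKubeConfig([credentialsId: 'user1', serverUrl: 'https://api.k8s.my-company.com']) {
            // Trigger a fresh rollout (requires kubectl 1.15+) and wait for it to finish
            sh 'kubectl rollout restart deployment/my-app'
            sh 'kubectl rollout status deployment/my-app --timeout=300s'
        }
    }
}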
Deployments do not natively have a rollout or redeploy option the way DeploymentConfigs do.
However, if you make a change to the Deployment YAML, a redeploy is performed with the new configuration. Using that idea, you can make a change to the Deployment's configuration that doesn't actually affect your project, and trigger it whenever you want a clean slate.
oc patch deployment/{Your Deployment name here} -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"last-restart\":\"`date +'%s'`\"}}}}}"
That command adds an annotation named "last-restart" and sets its value to the timestamp at which you run it. If you run it again, it replaces the timestamp with an updated value. Both of these actions are enough of a change to trigger a redeployment, but not a functional change to your project.
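Wired into a declarative pipeline, this could look roughly like the following sketch; the deployment name my-app is hypothetical, and the timestamp is taken from a shell step to keep the quoting simple:
stage('Redeploy') {
    steps {
        script {
            // Current Unix timestamp, used as the annotation value
            def ts = sh(returnStdout: true, script: 'date +%s').trim()
            def patch = '{"spec":{"template":{"metadata":{"annotations":{"last-restart":"' + ts + '"}}}}}'
            // Patching the pod template annotation triggers a new rollout
            sh "oc patch deployment/my-app -p '${patch}'"
        }
    }
}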
My Jenkins build is stuck with this message:
Still waiting to schedule task
‘docker’ is offline
Jenkinsfile:
pipeline {
    agent {
        node {
            label 'docker'
            customWorkspace "workspace/${JOB_NAME}/${BUILD_NUMBER}"
        }
    }
    ...
What is the cause of this error, and how can I diagnose it further?
I don't see any related containers running via docker ps.
The error relates to the agent named docker. Agents can be viewed under "Build Executor Status" / "Nodes" at https://jenkins-url.local/computer/
If you use node {} with a specific label and don't have any nodes with that label set up, the build will be stuck forever. You also need to make sure you have at least two executors set up when using a single node (like 'master'); otherwise pipeline builds will usually be stuck, as they consist of a root build and several sub-builds for the steps.
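To check quickly which labels actually exist and whether the matching agents are online, a small Script Console snippet (Manage Jenkins → Script Console) can help; this is just a diagnostic sketch:
// Lists all configured agents (excluding the built-in node) with their labels and offline state
Jenkins.instance.nodes.each { node ->
    def computer = node.toComputer()
    println "${node.name}  labels: '${node.labelString}'  offline: ${computer?.isOffline()}"
}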
I'm using Jenkins version 2.190.2 and Kubernetes plugin 1.19.0.
This Jenkins runs as a master in a Kubernetes cluster on AWS.
This Jenkins has the Kubernetes plugin configured and it's running OK.
I have some pod templates and containers configured that are running.
I'm able to run declarative pipelines specifying the agent and container.
My problem is that I'm unable to run jobs in parallel.
When more than one job is executed at the same time, the first job starts, its pod is created, and it executes its steps. The second job waits until the first job ends, even if they use different agents.
EXAMPLE:
Pipeline 1
pipeline {
    agent { label "bash" }
    stages {
        stage('init') {
            steps {
                container('bash') {
                    echo 'bash'
                    sleep 300
                }
            }
        }
    }
}
Pipeline 2
pipeline {
    agent { label "bash2" }
    stages {
        stage('init') {
            steps {
                container('bash2') {
                    echo 'bash2'
                    sleep 300
                }
            }
        }
    }
}
This is the org.csanchez.jenkins.plugins.kubernetes log. I've uploaded it to WeTransfer -> we.tl/t-ZiSbftKZrK
I've read a lot about this problem and I've configured Jenkins to start with these JAVA_OPTS, but the problem is not solved:
-Dhudson.slaves.NodeProvisioner.initialDelay=0
-Dhudson.slaves.NodeProvisioner.MARGIN=50
-Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
The Kubernetes plugin is configured with:
Kubernetes cloud / Concurrency Limit = 50. I've also tried leaving it empty, but the problem still occurs.
Kubernetes cloud / Pod retention = Never
Pod template / Concurrency Limit left empty. I've also tried 10, but the problem still occurs.
Pod template / Pod retention = Default
What configuration am I missing, or what am I doing wrong?
I finally solved my problem; it turned out to be caused by another issue.
We started to get errors when creating normal pods because our Kubernetes nodes on AWS didn't have enough free IPs. Once we scaled our nodes, Jenkins pipelines could run in parallel with different pods and containers.
Your pods are created in parallel:
Oct 31, 2019 3:13:30 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher launch
Created Pod: default/bash-4wjrk
...
Oct 31, 2019 3:13:30 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher launch
Created Pod: default/bash2-3rxck
but your bash2 pod is failing with:
Caused by: java.net.UnknownHostException: jenkins-jnlp.default.svc.cluster.local
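That hostname is the Jenkins JNLP service the agent tries to reach. As a quick check (a sketch, assuming the service is expected in the default namespace), you could verify that the service exists and resolves from inside the cluster:
kubectl get svc -n default jenkins-jnlp
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup jenkins-jnlp.default.svc.cluster.local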
You should use parallel stages, which are described in the Jenkins documentation for pipeline syntax.
Stages in Declarative Pipeline may declare a number of nested stages within a parallel block, which will be executed in parallel. Note that a stage must have one and only one of steps, stages, or parallel. The nested stages cannot contain further parallel stages themselves, but otherwise behave the same as any other stage, including a list of sequential stages within stages. Any stage containing parallel cannot contain agent or tools, since those are not relevant without steps.
In addition, you can force all of your parallel stages to be aborted when one of them fails by adding failFast true to the stage containing the parallel block. Another option for fail-fast behaviour is adding an option to the pipeline definition: parallelsAlwaysFailFast()
An example pipeline might look like this:
Jenkinsfile
pipeline {
    agent none
    stages {
        stage('Run pod') {
            parallel {
                stage('bash') {
                    agent {
                        label "init"
                    }
                    steps {
                        container('bash') {
                            echo 'bash'
                            sleep 300
                        }
                    }
                }
                stage('bash2') {
                    agent {
                        label "init"
                    }
                    steps {
                        container('bash') {
                            echo 'bash'
                            sleep 300
                        }
                    }
                }
            }
        }
    }
}
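If you want the fail-fast behaviour mentioned above, a sketch showing where both variants go, reusing the same "init" label (in practice you would normally pick one of the two):
pipeline {
    agent none
    // Variant 2: pipeline-level option; aborts all parallel branches once one fails
    options { parallelsAlwaysFailFast() }
    stages {
        stage('Run pod') {
            // Variant 1: the same effect, scoped to this parallel block only
            failFast true
            parallel {
                stage('bash') {
                    agent { label "init" }
                    steps { echo 'bash' }
                }
                stage('bash2') {
                    agent { label "init" }
                    steps { echo 'bash2' }
                }
            }
        }
    }
}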
Using the kubernetes-plugin, how does one build an image in a prior stage for use in a subsequent stage?
Looking at the podTemplate API, it feels like I have to declare all my containers and images up front.
In semi-pseudo code, this is what I'm trying to achieve.
pod {
    container('image1') {
        stage1 {
            $ pull/build/push 'image2'
        }
    }
    container('image2') {
        stage2 {
            $ do things
        }
    }
}
The Jenkins Kubernetes Pipeline Plugin initializes all slave pods during pipeline startup. This also means that all container images used within the pipeline need to be available in some registry. Perhaps you can give us more context about what you are trying to achieve; maybe there are other solutions for your problem.
There are certainly ways to dynamically create a pod from a build container and connect it as a slave at build time, but I already feel that this approach is not solid and will bring complications.
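One workaround along those lines (not part of the answer above, just a sketch): run two podTemplate blocks in a scripted pipeline, build and push image2 from the first pod, then start a second pod whose containerTemplate references image2. All names and the registry URL are hypothetical, and the docker container in the first pod is assumed to have access to a Docker daemon (for example via a mounted Docker socket or a dind sidecar), which is not shown here:
// First pod: build and push image2 using a docker-capable container
podTemplate(label: 'builder', containers: [
    containerTemplate(name: 'docker', image: 'docker:latest', ttyEnabled: true, command: 'cat')
]) {
    node('builder') {
        checkout scm   // assumes the job is configured from SCM and contains the Dockerfile
        container('docker') {
            sh 'docker build -t registry.example.com/image2:latest .'
            sh 'docker push registry.example.com/image2:latest'
        }
    }
}

// Second pod: now that image2 exists in the registry, it can back a container
podTemplate(label: 'runner', containers: [
    containerTemplate(name: 'image2', image: 'registry.example.com/image2:latest', ttyEnabled: true, command: 'cat')
]) {
    node('runner') {
        container('image2') {
            sh 'do-things'   // hypothetical command available inside image2
        }
    }
}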