I switched from DeploymentConfigs to Deployments in OpenShift 4.
In my Jenkins pipelines I had a step for rolling out the DeploymentConfig which looked like this:
openshift.withCluster() {
    openshift.withProject("project") {
        def rm = openshift.selector("dc", app).rollout()
        timeout(5) {
            openshift.selector("dc", app).related('pods').untilEach(1) {
                return (it.object().status.phase == "Running")
            }
        }
    }
}
In the openshift-jenkins-plugin there doesn't seem to be an option to roll out a Deployment. As far as I know, Deployment is also a native Kubernetes object, as opposed to the OpenShift-specific DeploymentConfig.
What would be an easy way to roll out a Deployment in OpenShift 4 from Jenkins?
You can use kubectl with Jenkins:
kubectl rollout undo deployment/abc
For example, you can call kubectl from a pipeline like this:
node {
    stage('Apply Kubernetes files') {
        withKubeConfig([credentialsId: 'user1', serverUrl: 'https://api.k8s.my-company.com']) {
            sh 'kubectl apply -f my-kubernetes-directory'
        }
    }
}
You can roll out the deployment as needed from the command line, and you can also build the command from variables if you need to.
https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_rollout/
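Putting that together, a rollout of a Deployment from a declarative pipeline could look roughly like this (a sketch: the Deployment name my-app is illustrative, the credential/server values are reused from the snippet above, and kubectl rollout restart requires kubectl 1.15+):
stage('Rollout deployment') {
    steps {
        withKubeConfig([credentialsId: 'user1', serverUrl: 'https://api.k8s.my-company.com']) {
            // Trigger a fresh rollout of the existing Deployment
            sh 'kubectl rollout restart deployment/my-app'
            // Block until the new ReplicaSet's pods are ready (or fail after 5 minutes)
            sh 'kubectl rollout status deployment/my-app --timeout=300s'
        }
    }
}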
Deployments do not natively have a rollout or redeploy option like DeploymentConfigs.
However, any change to the Deployment's pod template triggers a redeploy with the new configuration rolled in. Using that idea, you can make a change to the Deployment's configuration that doesn't actually affect your application, and trigger it whenever you want a clean slate.
oc patch deployment/{Your Deployment name here} -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"last-restart\":\"`date +'%s'`\"}}}}}"
That command creates an annotation named "last-restart" and sets its value to the timestamp at which you run it. If you run it again, it replaces the timestamp with an updated value. Either action is enough of a change to trigger a redeployment, but is not a functional change to your application.
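The same trick can be wrapped in a pipeline step. A minimal sketch, assuming a Deployment named my-app (illustrative) and an agent that has oc on its PATH and is already logged in to the cluster:
stage('Redeploy') {
    steps {
        script {
            // Any changing value works; a timestamp keeps it human-readable
            def ts = System.currentTimeMillis().toString()
            sh "oc patch deployment/my-app -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"last-restart\":\"${ts}\"}}}}}'"
            // Wait for the new ReplicaSet to finish rolling out
            sh "oc rollout status deployment/my-app"
        }
    }
}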
Related
I have a Jenkins pipeline which runs an application in cloud using Kubernetes plugin.
So far, I have a simple yaml file which configures a pod. The Jenkins pipeline creates a pod and it does some operations (It's parsing some data).
I've created a service (with 1 replica) which I deployed, and I want to use that in Jenkins instead of creating the same pod every time I run the pipeline.
Can someone please advise how to do that?
Currently this is how I am running the pipeline:
stage('Parse logs') {
    agent {
        kubernetes {
            cloud 'sandbox'
            label 'log-parser'
            yamlFile 'jenkins/logparser.yaml'
        }
    }
    when {
        beforeAgent true
        expression { params.parse_logs }
    }
    steps {
        container('log-parser') {
            sh '/usr/local/openjdk-11/bin/java -jar /opt/log-parser/log-parser.jar --year=$year --month=$month --day=$day --hour=$hour'
        }
    }
}
Can you please advise how to use the created 'log-parser' service instead of creating a pod every time I run the pipeline?
When using Docker in a Jenkins pipeline, I got the error: "docker: command not found".
How can I start Docker on OpenShift Jenkins?
I also have no permission to open the plugins page in Jenkins.
The easiest way is to use the Jenkins installation provided by OpenShift.
The documentation for Jenkins on OpenShift is here: https://docs.openshift.com/container-platform/4.8/openshift_images/using_images/images-other-jenkins.html
To build Docker images, the proposed solution is to use "source-to-image" (s2i) and/or a "BuildConfig" to manage Docker builds.
The doc also explains how to manage Jenkins authentication/authorization with Jenkins integrated into OCP (OAuth + role bindings to OpenShift).
Bonus points: Jenkins comes configured with all the required plugins (Kubernetes, OCP...) and is automatically upgraded when you upgrade OCP.
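For instance, a docker-strategy build could be driven from a pipeline stage roughly like this (a sketch, assuming an application name of myapp, a Dockerfile in the workspace, and an agent that is logged in to the cluster with oc on its PATH):
stage('Build image') {
    steps {
        sh '''
            # Create the BuildConfig on the first run (ignore the error if it already exists)
            oc new-build --name=myapp --binary --strategy=docker || true
            # Upload the workspace (including its Dockerfile) and stream the build logs
            oc start-build myapp --from-dir=. --follow
        '''
    }
}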
Using Jenkins on OpenShift, you won't find any Docker runtime on your Jenkins master - and probably not on the slaves either: cri-o being the preferred runtime (and only runtime, as of 4.x).
If you want to start containers, you're supposed to use any of the OpenShift or Kubernetes plugins: creating Pods. In OpenShift, you won't need to configure or install anything on Jenkins itself - aside from writing your pipeline. As long as your Jenkins ServiceAccount has edit or admin privileges over a Project, then it would be able to manage containers in the corresponding Namespace.
pipeline {
    agent {
        node { label 'maven' }
    }
    stages {
        stage('clone') {
            steps {
                // clone the repository serving your YAMLs (e.g. via checkout scm)
                checkout scm
            }
        }
        stage('create') {
            steps {
                script {
                    openshift.withCluster() {
                        openshift.withProject() {
                            try {
                                timeout(5) {
                                    objectsFromTemplate = openshift.process("-f", "/path/to/clone/directory/deploy.yaml", '-p', "FRONTNAME=${params.buildHash}",
                                        '-p', "LDAP_IMAGE_TAG=${params.buildHash}", '-p', "ROOT_DOMAIN=${params.rootDomain}")
                                    echo "The template will create ${objectsFromTemplate.size()} objects"
                                    for (o in objectsFromTemplate) { o.metadata.labels["${templateSel}"] = "${templateMark}-${params.buildHash}" }
                                    created = openshift.create(objectsFromTemplate)
                                    created.withEach { echo "Created ${it.name()} from template with labels ${it.object().metadata.labels}" }
                                }
                            } catch(e) {
                                // DoSomething
                            }
                        }
                    }
                }
            }
        }
    }
}
If you don't have full access to your Jenkins admin interface, check the RoleBindings in that namespace: maybe you're not admin?
It won't prevent you from creating your own podTemplates (using ConfigMaps), Credentials (using Secrets) or jobs/Jenkinsfiles (using BuildConfigs). You can also install additional plugins to Jenkins by adding to or changing the content of the INSTALL_PLUGINS environment variable in your Jenkins deployment.
Also note that the Jenkins ServiceAccount token could be used to authenticate against Jenkins as admin, working around the Jenkins OpenShift OAuth integration.
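For example, assuming Jenkins was deployed from the standard OpenShift template as a DeploymentConfig named jenkins (the plugin identifiers below are placeholders), additional plugins can be requested like this:
oc set env dc/jenkins INSTALL_PLUGINS=plugin-id:version,other-plugin-id:version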
I am trying to keep the slave pod always running. Unfortunately, when using the Kubernetes agent inside the pipeline, I am still struggling with setting "podRetention" to always.
For a declarative pipeline you would use idleMinutes to keep the pod around longer:
pipeline {
    agent {
        kubernetes {
            label "myPod"
            defaultContainer 'docker'
            yaml readTrusted('kubeSpec.yaml')
            idleMinutes 30
        }
    }
    // ... stages go here
}
The idea is to keep the pod alive for a certain time for jobs that are triggered often, for instance the one watching the master branch. That way, if developers are on a rampage pushing to master, the builds will be fast. When the devs are done, we don't need the pod up forever and we don't want to pay for extra resources for nothing, so we let the pod kill itself.
Using the kubernetes-plugin, how does one build an image in a prior stage for use in a subsequent stage?
Looking at the podTemplate API it feels like I have to declare all my containers and images up front.
In semi-pseudo code, this is what I'm trying to achieve.
pod {
    container('image1') {
        stage1 {
            $ pull/build/push 'image2'
        }
    }
    container('image2') {
        stage2 {
            $ do things
        }
    }
}
The Jenkins Kubernetes Pipeline Plugin initializes all slave pods during pipeline startup. This also means that all container images used within the pipeline need to be available in some registry. Perhaps you can give us more context about what you are trying to achieve; maybe there are other solutions to your problem.
There are certainly ways to dynamically create a pod from a build container and connect it as a slave at build time, but I already feel that this approach is not solid and will bring complications.
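To illustrate the constraint, here is a sketch of how both containers would be declared up front in a scripted podTemplate (the registry, image names and label are illustrative, and image2 must already exist in a registry when the pod is scheduled):
podTemplate(label: 'two-stage-build', containers: [
    containerTemplate(name: 'image1', image: 'registry.example.com/image1:latest', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'image2', image: 'registry.example.com/image2:latest', ttyEnabled: true, command: 'cat')
]) {
    node('two-stage-build') {
        stage('stage1') {
            container('image1') {
                // pull/build/push of image2 would happen here, but the image
                // referenced by the pod template must already be pullable
                sh 'echo build and push image2'
            }
        }
        stage('stage2') {
            container('image2') {
                sh 'echo do things'
            }
        }
    }
}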
To clarify, this is not a question about running Jenkins in Kubernetes; this is about deploying to Kubernetes from Jenkins.
I have recently settled on using Jenkins (and the workflow/pipeline plugin) to orchestrate our delivery process. Currently, I'm using the imperative style to deploy as per below:
stage 'Deploy to Integ'
// Clean up old releases
sh "kubectl delete svc,deployment ${serviceName} || true"
def cmd = """kubectl run ${serviceName} --image=${dockerRegistry}/${serviceName}:${env.BUILD_NUMBER} --replicas=2 --port=${containerPort} --expose --service-overrides='{ "spec": { "type": "LoadBalancer" }}' """
// execute shell for the command above
sh cmd
This works well because ${env.BUILD_NUMBER} persists through the pipeline, making it easy for me to ensure the version I deploy is the same all the way through. The problem I have is that the imperative style isn't scalable; I would like to use the declarative approach and keep the definition in VCS.
Unfortunately, the declarative approach comes with the adverse effect of needing to explicitly state the version of the image (to be deployed) in the YAML. One way around this might be to use the latest tag; however, this comes with its own risks. For example, let's take the scenario where I'm about to deploy latest to production and a new version gets tagged latest. The new latest may not have gone through testing.
I could get into changing the file programmatically, but that feels rather clunky, and doesn't help developers who have the file checked out to understand what is latest.
What have you done to solve this issue? Am I missing something obvious? What workflow are you using?
In my yaml file (server.origin.yml), I set my image as image-name:$BUILD_NUMBER
Then I run: envsubst < ./server.origin.yml > ./server.yml
This command will replace the string $BUILD_NUMBER with the value of the BUILD_NUMBER environment variable.
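Put together, a sketch of the corresponding pipeline stage (assuming the YAML file lives at the repository root and kubectl is already configured on the agent; the file names follow the answer above):
stage('Deploy') {
    steps {
        // Substitute $BUILD_NUMBER (and any other referenced env vars) into the manifest
        sh 'envsubst < ./server.origin.yml > ./server.yml'
        // Apply the rendered manifest
        sh 'kubectl apply -f ./server.yml'
    }
}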