I have a Jenkins pipeline that runs an application in the cloud using the Kubernetes plugin.
So far I have a simple YAML file that configures a pod. The Jenkins pipeline creates the pod and performs some operations (it parses some data).
I've created a service (with 1 replica) which I deployed, and I want Jenkins to use that instead of creating the same pod every time I run.
Can someone please advise how to do that?
Currently this is how I am running the pipeline:
stage('Parse logs') {
    agent {
        kubernetes {
            cloud 'sandbox'
            label 'log-parser'
            yamlFile 'jenkins/logparser.yaml'
        }
    }
    when {
        beforeAgent true
        expression { params.parse_logs }
    }
    steps {
        container('log-parser') {
            sh '/usr/local/openjdk-11/bin/java -jar /opt/log-parser/log-parser.jar --year=$year --month=$month --day=$day --hour=$hour'
        }
    }
}
Can you please advise how to use the created 'log-parser' service instead of creating a pod every time the pipeline runs?
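For context, the kind of pod definition such a yamlFile typically holds looks like this (a sketch with an assumed image name, not the asker's actual file; the container name must match the container() step in the pipeline):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: log-parser
spec:
  containers:
    - name: log-parser                        # must match container('log-parser')
      image: my-registry/log-parser:latest    # assumed image name
      command: ["sleep"]
      args: ["infinity"]                      # keep the container alive for the steps
```

Note that a Kubernetes Service only routes traffic to existing pods; the Kubernetes plugin's agent model always schedules a pod per build, so reusing the deployed replica would mean calling it remotely (e.g. over HTTP) from a lightweight agent rather than running the JAR inside the agent pod.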
I am trying to use the Playwright Docker image in Jenkins. In the official documentation, they give an example of how to use the Docker plugin:
pipeline {
    agent { docker { image 'mcr.microsoft.com/playwright:v1.25.0-focal' } }
    stages {
        stage('e2e-tests') {
            steps {
                // Depends on your language / test framework
                sh 'npm install'
                sh 'npm run test'
            }
        }
    }
}
However, using the Docker plugin is not an option for me and I have to use pod templates instead. Here is the setting that I am using:
With this setting I can see the pod is running (I can run commands in the pod terminal); however, I get this message in the Jenkins logs, and it eventually times out and the agent gets suspended.
Waiting for agent to connect (30/100):
What do I need to change in pod/container template config?
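A common cause of "Waiting for agent to connect" is the pod template overriding the jnlp container, so the agent JAR that dials back to the controller never starts. A pod template that leaves jnlp alone and adds Playwright as a second container might look like this (a sketch; names are assumptions, not taken from the question):

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: playwright
      image: mcr.microsoft.com/playwright:v1.25.0-focal
      command: ["sleep"]
      args: ["infinity"]    # stay alive; steps run via container('playwright')
    # no container named 'jnlp' is defined here, so the plugin injects its
    # default jnlp agent container, which is what connects back to Jenkins
```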
I am new to Tekton pipelines. I am migrating a Jenkinsfile to a Tekton pipeline.
I have the following Jenkinsfile code:
pipeline {
    libraries {
        lib 'operator-pipeline-library'
    }
    parameters {
        string(name: 'OPERATOR_CATALOG_REPO', description: 'Operator images that are in the catalog', defaultValue: 'operator-3rdparty-dev-local.net')
        string(name: 'OPERATOR_INDEX', description: 'Artifactory', defaultValue: 'operator-3rdparty-catalog-dev-local.net')
    }
    agent {
        kubernetes {
            cloud 'openshift'
            yamlFile 'operator-mirror-pod.yaml'
        }
    }
}
I want to know how to rewrite the following
agent {
    kubernetes {
        cloud 'openshift'
        yamlFile 'operator-pod.yaml'
    }
}
in Tekton pipelines
When I want to do something with Tekton, the first thing I would recommend is checking the Tekton Catalog; they have a GitHub repository.
In your case, they have the openshift-client task that could be useful.
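For illustration, referencing a catalog task from a Tekton Pipeline could look like this (a sketch only; the task's parameters vary by version, and the names and command here are assumptions):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: operator-mirror
spec:
  tasks:
    - name: apply-objects
      taskRef:
        name: openshift-client            # the catalog task mentioned above
      params:
        - name: SCRIPT
          value: "oc apply -f operator-pod.yaml"   # assumed command
```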
Depending on how your Jenkins was configured, you may have to reproduce its credentials configuration as Kubernetes Secrets. And when executing commands against the local cluster, make sure your PipelineRun uses a ServiceAccount with enough privileges to create objects.
As a more generic answer to "how do I translate Jenkins Groovy into Tekton": you don't. You rewrite everything from scratch. And while that may be the beginning of a long road, at the end of it you're done with Groovy. You can script in Python, Ruby, shell, whatever runtime you have in your pods, all of which are better options in terms of performance, maintenance, ...
When using Docker in a Jenkins pipeline I got this problem: "docker: command not found".
How can I start Docker on OpenShift Jenkins?
Also, I have no permission to open the plugins page in Jenkins.
The easiest way is to use the Jenkins installation provided by OpenShift.
The doc for Jenkins in OpenShift is here: https://docs.openshift.com/container-platform/4.8/openshift_images/using_images/images-other-jenkins.html
To build Docker images, the proposed solution is to use "source-to-image" (s2i) and/or a "BuildConfig" to manage docker builds.
The doc also explains how to manage Jenkins authentication/authorization with Jenkins integrated into OCP (OAuth + role bindings to OpenShift).
Bonus points: Jenkins comes configured with all the required plugins (k8s, OCP, ...) and is automatically upgraded when you upgrade OCP.
Using Jenkins on OpenShift, you won't find any Docker runtime on your Jenkins master, and probably not on the slaves either: CRI-O is the preferred runtime (and the only runtime, as of 4.x).
If you want to start containers, you're supposed to use one of the OpenShift or Kubernetes plugins, creating Pods. In OpenShift you won't need to configure or install anything on Jenkins itself, aside from writing your pipeline. As long as your Jenkins ServiceAccount has edit or admin privileges over a Project, it will be able to manage containers in the corresponding Namespace.
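Granting those privileges, if they are missing, is typically a one-liner; the namespace and ServiceAccount names below are assumptions:

```shell
# allow the 'jenkins' ServiceAccount from the 'ci' namespace
# to manage objects in the 'myproject' namespace
oc policy add-role-to-user edit system:serviceaccount:ci:jenkins -n myproject
```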
pipeline {
    agent {
        node { label 'maven' }
    }
    stages {
        stage('clone') {
            steps {
                // clone the repository serving your YAMLs here
                echo 'clone your repository'
            }
        }
        stage('create') {
            steps {
                script {
                    openshift.withCluster() {
                        openshift.withProject() {
                            try {
                                timeout(5) {
                                    objectsFromTemplate = openshift.process("-f", "/path/to/clone/directory/deploy.yaml",
                                        '-p', "FRONTNAME=${params.buildHash}",
                                        '-p', "LDAP_IMAGE_TAG=${params.buildHash}",
                                        '-p', "ROOT_DOMAIN=${params.rootDomain}")
                                    echo "The template will create ${objectsFromTemplate.size()} objects"
                                    for (o in objectsFromTemplate) { o.metadata.labels["${templateSel}"] = "${templateMark}-${params.buildHash}" }
                                    created = openshift.create(objectsFromTemplate)
                                    created.withEach { echo "Created ${it.name()} from template with labels ${it.object().metadata.labels}" }
                                }
                            } catch(e) {
                                // DoSomething
                            }
                        }
                    }
                }
            }
        }
    }
}
If you don't have full access to your Jenkins admin interface, check the RoleBindings in that namespace: maybe you're not an admin?
That won't prevent you from creating your own podTemplates (using ConfigMaps), Credentials (using Secrets) or jobs/Jenkinsfiles (using BuildConfigs). You can also install additional plugins into Jenkins by adding to or changing the content of the INSTALL_PLUGINS environment variable in your Jenkins deployment.
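As an example of the ConfigMap approach: the OpenShift Jenkins sync plugin picks up ConfigMaps labeled role=jenkins-slave and turns their entries into pod templates. A sketch, with an assumed image and template name:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: jenkins-agent-python
  labels:
    role: jenkins-slave      # the sync plugin watches for this label
data:
  template: |
    <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
      <name>python</name>
      <label>python</label>
      <containers>
        <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
          <name>jnlp</name>
          <image>registry.example.com/jenkins-agent-python:latest</image>
          <ttyEnabled>true</ttyEnabled>
          <args>${computer.jnlpmac} ${computer.name}</args>
        </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
      </containers>
    </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
```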
Also note that the Jenkins ServiceAccount token can be used to authenticate against Jenkins as admin, working around the Jenkins OpenShift OAuth integration.
I am new to Jenkins. I have a project that runs Django and React on the same port on an AWS EC2 machine. My database runs on RDS. Now I am trying to implement a pipeline and automatic deployment using Jenkins. All the tutorials I have been reading or watching suggest using the AWS CodeDeploy service.
This is my sample Jenkinsfile, which currently tests my React code. Since the main intention is automatic deployment, I am not thinking about testing Python here. I just want my code to be deployed to EC2 automatically.
pipeline {
    agent any
    stages {
        stage('Install dependency') {
            steps {
                sh "yarn install"
            }
        }
        stage('Test project') {
            steps {
                sh "yarn test"
            }
        }
        stage('Build project') {
            steps {
                sh "yarn build"
            }
        }
    }
}
I know about the AWS CodeDeploy plugin for Jenkins, but it uses S3 and AWS CodeDeploy. What I am really trying to achieve is one of these:
I build my project in Jenkins and it automatically ships the build to the EC2 machine and makes it live.
OR
Can I connect to my EC2 instance and do the same things we do manually? Like: log in to the instance, fetch the code from git, merge it, build it, restart the service.
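The second approach can be sketched as one extra stage that pushes the build over SSH and restarts the service. This is only a sketch: the host, paths, and systemd unit name are assumptions, and the 'ec2-ssh-key' credential (used by the SSH Agent plugin's sshagent step) would have to exist in Jenkins:

```groovy
stage('Deploy to EC2') {
    steps {
        // 'ec2-ssh-key' is an assumed SSH private key credential ID
        sshagent(credentials: ['ec2-ssh-key']) {
            sh '''
                # copy the built assets to the instance (assumed paths)
                scp -o StrictHostKeyChecking=no -r build/ ubuntu@ec2-host.example.com:/srv/myapp/
                # restart the service that serves the app (assumed unit name)
                ssh -o StrictHostKeyChecking=no ubuntu@ec2-host.example.com "sudo systemctl restart myapp"
            '''
        }
    }
}
```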
I'm using Jenkins version 2.190.2 and Kubernetes plugin 1.19.0.
This Jenkins runs as master on a Kubernetes cluster at AWS.
The Kubernetes plugin is configured and running OK.
I have some pod templates and containers configured and running.
I'm able to run declarative pipelines specifying agent and container.
My problem is that I'm unable to run jobs in parallel.
When more than one job is executed at the same time, the first job starts, its pod is created and it executes its stuff. The second job waits until the first job ends, even if they use different agents.
EXAMPLE:
Pipeline 1
pipeline {
    agent { label "bash" }
    stages {
        stage('init') {
            steps {
                container('bash') {
                    echo 'bash'
                    sleep 300
                }
            }
        }
    }
}
Pipeline 2
pipeline {
    agent { label "bash2" }
    stages {
        stage('init') {
            steps {
                container('bash2') {
                    echo 'bash2'
                    sleep 300
                }
            }
        }
    }
}
This is the org.csanchez.jenkins.plugins.kubernetes log. I've uploaded it to WeTransfer -> we.tl/t-ZiSbftKZrK
I've read a lot about this problem and I've configured Jenkins to start with these JAVA_OPTS, but the problem is not solved:
-Dhudson.slaves.NodeProvisioner.initialDelay=0
-Dhudson.slaves.NodeProvisioner.MARGIN=50
-Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
The Kubernetes plugin is configured with:
Kubernetes cloud / Concurrency Limit = 50. I've also tried leaving it empty, but the problem still occurs.
Kubernetes cloud / Pod retention = Never
Pod template / Concurrency Limit empty. I've also tried 10, but the problem still occurs.
Pod template / Pod retention = Default
What configuration am I missing, or what am I doing wrong?
Finally I solved my problem, and it turned out to be caused by another problem.
We started to get errors creating normal pods because our Kubernetes nodes at AWS didn't have enough free IPs. Once we scaled our nodes, Jenkins pipelines could run in parallel with different pods and containers.
Your pods are created in parallel:
Oct 31, 2019 3:13:30 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher launch
Created Pod: default/bash-4wjrk
...
Oct 31, 2019 3:13:30 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher launch
Created Pod: default/bash2-3rxck
but your bash2 pod is failing with:
Caused by: java.net.UnknownHostException: jenkins-jnlp.default.svc.cluster.local
You should use parallel stages, which are described in the Jenkins documentation for Pipeline syntax.
Stages in Declarative Pipeline may declare a number of nested stages within a parallel block, which will be executed in parallel. Note that a stage must have one and only one of steps, stages, or parallel. The nested stages cannot contain further parallel stages themselves, but otherwise behave the same as any other stage, including a list of sequential stages within stages. Any stage containing parallel cannot contain agent or tools, since those are not relevant without steps.
In addition, you can force all of your parallel stages to be aborted when one of them fails by adding failFast true to the stage containing the parallel. Another option for enabling fail-fast behavior is adding parallelsAlwaysFailFast() to the pipeline's options.
An example pipeline might look like this:
Jenkinsfile
pipeline {
    agent none
    stages {
        stage('Run pod') {
            parallel {
                stage('bash') {
                    agent { label "init" }
                    steps {
                        container('bash') {
                            echo 'bash'
                            sleep 300
                        }
                    }
                }
                stage('bash2') {
                    agent { label "init" }
                    steps {
                        container('bash') {
                            echo 'bash2'
                            sleep 300
                        }
                    }
                }
            }
        }
    }
}
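The failFast behavior mentioned above is enabled on the stage that contains the parallel block; a minimal fragment showing where it goes:

```groovy
stage('Run pod') {
    failFast true    // abort the sibling stages as soon as one of them fails
    parallel {
        // ... parallel stages as in the example ...
    }
}
```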