We are about to adopt SCDF together with Kubernetes, but we are not sure whether we should deploy all (or only some) of the components (SCDF server, database, message broker) on the K8s cluster, or whether we can run all three components outside the cluster and use K8s only to deploy the data pipelines.
Thanks.
I have a basic question about deploying Spring tasks/batch jobs on SCDF for Kubernetes. If I deploy SCDF on Kubernetes and then schedule a batch job, which Kubernetes cluster is the batch job deployed on? Where is the pod created? In the same cluster where the SCDF server is running?
By default, the apps are deployed in the same cluster and namespace as the SCDF server, but this is configurable. You can configure any number of target “platforms”. Each platform is essentially a set of deployment properties keyed to a logical name, and you can pass the platform name as a parameter when each task is launched. This is described in the SCDF reference documentation.
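As a minimal sketch, such platform accounts might be declared in the SCDF server's application.yaml roughly like this (the account names, namespaces, and resource values here are made-up examples, not defaults):

```yaml
spring:
  cloud:
    dataflow:
      task:
        platform:
          kubernetes:
            accounts:
              default:
                namespace: default
              batch-heavy:
                # tasks launched on this platform land in a different namespace
                namespace: batch-jobs
                limits:
                  memory: 4096Mi
                  cpu: 2000m
```

A task could then be launched onto a specific platform with something like `task launch mytask --platformName batch-heavy` in the SCDF shell (check the shell options for your SCDF version).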
My requirement is to trigger CI and CD on on-prem Kubernetes infrastructure whenever a PR is raised. Jenkins X was an ideal candidate, but unfortunately, due to a few proxy issues, it didn't come to fruition.
Coming to kubernetes-operator, I am looking for a few clarifications.
I have a 4-node cluster, with one node acting as the control plane.
Do I have to set up a new Jenkins instance on my K8s cluster beforehand, or does kubernetes-operator do that for me?
I am looking to access the Jenkins instance under the domain jenkins.mybusinessunit.myorg.com/jenkins.
Do I have to do any additional configuration to enable a master-slave setup?
Does kubernetes-operator provide a feature to support a CI/CD model like Jenkins X?
Thanks in advance.
As per your comments, you are actually interested in more of a cloud-native solution for operating Jenkins, so here goes.
Since you already have a Kubernetes cluster and would like to use the Jenkins Kubernetes operator, I would recommend using the Jenkins Kubernetes plugin for managing your build workloads.
The Jenkins Kubernetes plugin enables you to run each of your pipelines in a separate Pod in your Kubernetes cluster, and once the required Service resources are set up, the communication between master and slave pods is regulated entirely by the plugin. I would recommend that you look into its documentation, which is quite good (in comparison to other plugins).
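For illustration, the plugin also supports Configuration as Code, so a Kubernetes cloud with one agent pod template could be declared roughly like the sketch below. The server URL, Jenkins URL, label, and image are placeholders, not values your cluster will actually have:

```yaml
jenkins:
  clouds:
    - kubernetes:
        name: "kubernetes"
        serverUrl: "https://kubernetes.default"
        namespace: "jenkins"
        jenkinsUrl: "http://jenkins.jenkins.svc.cluster.local:8080"
        templates:
          - name: "maven-agent"
            label: "maven"
            containers:
              - name: "maven"
                image: "maven:3.8-openjdk-11"
                # keep the container alive so build steps can exec into it
                command: "sleep"
                args: "9999999"
```

A pipeline then requests this template with `agent { label 'maven' }`, and the plugin schedules the build in a fresh pod that is torn down when the build finishes.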
Now, since you are also using the Jenkins Kubernetes operator, you should know that the plugin is installed as one of the default plugins and is available as soon as your Jenkins instance is spun up. I would recommend you read through the Jenkins Kubernetes operator documentation to get a better grasp of what happens while it is running.
So now I will move on to your questions.
Do I have to set up a new Jenkins instance on my K8s cluster beforehand, or does kubernetes-operator do that for me?
If you install the Jenkins Kubernetes operator via its Helm chart, then no: a Jenkins master instance is included. Otherwise, if you install the controller into your cluster manually, you will need to create a Jenkins custom resource (the operator ships the CRD), which the operator will reconcile into a running Jenkins instance for you.
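For reference, a minimal Jenkins custom resource could look roughly like this sketch; the name, namespace, image tag, and resource figures are illustrative only:

```yaml
apiVersion: jenkins.io/v1alpha2
kind: Jenkins
metadata:
  name: example
  namespace: jenkins
spec:
  master:
    containers:
      - name: jenkins-master
        image: jenkins/jenkins:lts   # pin a specific version in practice
        resources:
          requests:
            cpu: "1"
            memory: 500Mi
          limits:
            cpu: 1500m
            memory: 3Gi
```

Applying it (e.g. `kubectl apply -f jenkins.yaml`) is what triggers the operator to provision the instance.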
I am looking to access the Jenkins instance under the domain jenkins.mybusinessunit.myorg.com/jenkins.
Use an Ingress + load balancer + DNS record, or expose the Pod via NodePort. Note that exposing your master Pod via NodePort may mean making your Jenkins master instance publicly reachable (and that may not be wise).
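A sketch of the Ingress variant, assuming an NGINX ingress controller and that the operator has created a Service named jenkins-operator-http-example on port 8080 (verify the actual Service name in your cluster, since the operator derives it from the custom resource name):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins
  namespace: jenkins
spec:
  ingressClassName: nginx
  rules:
    - host: jenkins.mybusinessunit.myorg.com
      http:
        paths:
          - path: /jenkins
            pathType: Prefix
            backend:
              service:
                name: jenkins-operator-http-example  # placeholder, check your cluster
                port:
                  number: 8080
```

A DNS record for jenkins.mybusinessunit.myorg.com then points at the load balancer fronting the ingress controller.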
Do I have to do any additional configuration to enable a master-slave setup?
Please refer to the documentation of the Jenkins Kubernetes plugin and the Jenkins Kubernetes operator. All the details are provided there, but the required configuration is rather minimal; the cloud-configuration sketch above covers the essentials.
Does kubernetes-operator provide a feature to support a CI/CD model like Jenkins X?
No. The Jenkins Kubernetes operator is there only to manage your Jenkins instance and its backups in an immutable fashion. Jenkins X can be used in combination with Jenkins, but neither replaces the other completely.
I have inherited a Jenkins installation which is used by multiple remote teams, and running on an Amazon EKS cluster. For reasons that are not really relevant, I need to move this Jenkins workload to a new EKS cluster.
Deploying Jenkins itself is not a major issue; I am doing so using Helm. The persistence of the existing Jenkins deployment is bound to an Amazon EBS volume, and the persistence of the new deployment will be as well. The mount point will be /var/jenkins_home.
I'm trying to find a simple way of migrating everything from the current Jenkins installation and configuration to the new one. This mainly includes:
Authorization Strategy (RBAC)
Jobs
Plugins
Cloud and Agent Config
I know that everything required is most likely in Jenkins Home. Could I, in theory, just dump out the current Jenkins Home folder and import it into the new running Jenkins container using kubectl cp or something like that? Is there an easier way? Is this unsafe in some way?
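A sketch of that dump-and-restore approach follows; the kubectl contexts, namespace, pod, and StatefulSet names are placeholders, and builds on the old master should be stopped first so JENKINS_HOME is quiescent:

```bash
# Archive the old JENKINS_HOME straight out of the running pod.
# Streaming tar over exec preserves permissions better than kubectl cp.
kubectl --context old-eks -n jenkins exec jenkins-0 -- \
  tar czf - -C /var/jenkins_home . > jenkins_home.tgz

# Unpack the archive into the new pod's EBS-backed volume.
kubectl --context new-eks -n jenkins exec -i jenkins-0 -- \
  tar xzf - -C /var/jenkins_home < jenkins_home.tgz

# Restart so Jenkins reloads jobs, plugins, and security config from disk.
kubectl --context new-eks -n jenkins rollout restart statefulset jenkins
```

The main safety caveats are keeping the Jenkins and plugin versions on the new instance at least as new as on the old one, and making sure file ownership (the jenkins user, UID 1000 in the official images) survives the copy.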
We currently have multiple Linux systems configured as static dedicated remote agents for our Bamboo project and want to move to using containerized remote agents that are spun up on demand. How can this be done? I see a containerized remote agent image, but where does it run?
We have containerized our build environment, but the remote agent is still running on the dedicated hardware. We want to remove all our dedicated remote agent machines and run everything in containers. I'm an end user with no access to the server, so I'm not sure how I can accomplish this. From what I've read, this is probably not possible in the cloud, so I'm guessing an on-prem cluster is needed? I'm new to this concept.
Bamboo provides a Per-build Container (PBC) plugin for this. The cluster is specified as part of the plugin configuration, and the containerized remote agent (the "sidekick") is specified in the build job and spins up when the job starts. Based on the documentation, the backing cluster can be AWS ECS, a Docker cluster, or Kubernetes.
I have a .NET Core web API and an Angular 7 app that I need to deploy to multiple client servers, potentially running a plethora of different OS setups.
Dockerising the whole app seems like the best way to handle this, so I can ensure that it all works wherever it goes.
My question is about my understanding of Kubernetes and the distribution of the application. We use Azure DevOps for build pipelines, so, if I'm correct, it would work as follows:
1) Azure DevOps builds the Docker image and publishes it to a registry.
2) Kubernetes could detect that there is a new version of the Docker image and roll it out across all of the different client servers?
3) Client-specific app settings could be handled by Kubernetes Secrets (see the sketch after this list).
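To make point 3 concrete, this is the kind of setup I have in mind; all names, the image, and the settings key below are placeholders:

```yaml
# Hypothetical per-client Secret; the key maps to the .NET setting
# ConnectionStrings:Default via the double-underscore convention.
apiVersion: v1
kind: Secret
metadata:
  name: client-a-appsettings
type: Opaque
stringData:
  ConnectionStrings__Default: "Server=db.client-a.local;Database=app"
---
# The API Deployment consumes every key in the Secret as an env var.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api
    spec:
      containers:
        - name: api
          image: myregistry.azurecr.io/web-api:1.0.0  # placeholder image
          envFrom:
            - secretRef:
                name: client-a-appsettings
```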
Is that a reasonable setup? Have I missed anything? And are there any recommendations on setup/guides I can follow to get started?
Thanks in advance, James
Azure DevOps will perform the CI part of your pipeline. Once CI completes, Azure DevOps will push the images to ACR. The CD part should be done either directly from Azure DevOps (you may have to install a private agent on your on-prem servers and configure firewalls, etc.) or with Kubernetes-native CD tools such as Spinnaker or Jenkins X. Note that Kubernetes will not detect a new image on its own; the CD step has to update the Deployment's image tag to trigger a rollout. Secrets should be kept in Kubernetes Secrets.
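For the CI half, a minimal azure-pipelines.yml sketch that builds the image and pushes it to ACR might look like this; the service connection and repository names are placeholders:

```yaml
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: Docker@2
    displayName: Build and push image to ACR
    inputs:
      containerRegistry: my-acr-connection  # placeholder service connection
      repository: web-api
      command: buildAndPush
      Dockerfile: Dockerfile
      tags: |
        $(Build.BuildId)
```

The CD step, wherever it runs, then triggers the rollout explicitly, e.g. with `kubectl set image deployment/web-api api=myregistry.azurecr.io/web-api:$(Build.BuildId)`.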