Is there a way to use AWS CDK to set up an index rollup job? The AWS documentation shows how to go into OpenSearch Dashboards and create an index rollup job under Index Management, but that is of course manual. By the way, I want to be able to programmatically run queries on that rollup index.
We're setting up multiple more or less static servers in AWS. These are primarily configured via Ansible and that's also the ultimate source of truth when it comes to their existence, grouping, host names and IPs. But then there's Jenkins deploying configuration files to these servers based on new commits added to a git repository.
I have an issue with listing the target servers directly in a Jenkinsfile. How should I proceed? What are the most common ways of dealing with this?
I understand this is mostly an opinion-based topic, but maybe there's a particular Jenkins feature which I don't know about...?
Thank you.
This is very subjective. The following are a few ways to do this:
Store the details somewhere accessible after the Ansible step, for example commit them to a GitHub repo and retrieve them within the Jenkins job.
Use the AWS APIs/CLI to retrieve the server details. You can either set up the AWS CLI on the Jenkins agent or use something like the AWS Steps plugin (see the sketch after this list).
Make an API call to Jenkins after the Ansible script is executed and update the server details in the job itself.
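For the second option, a minimal Jenkinsfile sketch; the tag names, filter values and the deploy step are assumptions, and it assumes the agent has the AWS CLI and credentials configured:

    pipeline {
        agent any
        stages {
            stage('Discover targets') {
                steps {
                    script {
                        // Ask EC2 for the running instances Ansible created, filtered by a tag (assumed: Role=app-server).
                        def ips = sh(
                            script: 'aws ec2 describe-instances --filters "Name=tag:Role,Values=app-server" "Name=instance-state-name,Values=running" --query "Reservations[].Instances[].PrivateIpAddress" --output text',
                            returnStdout: true
                        ).trim()
                        // Expose the discovered hosts to later stages.
                        env.TARGET_HOSTS = ips.split(/\s+/).join(',')
                    }
                }
            }
            stage('Deploy config') {
                steps {
                    // Placeholder for the actual deployment, e.g. an Ansible run against TARGET_HOSTS.
                    sh 'echo "Deploying to ${TARGET_HOSTS}"'
                }
            }
        }
    }

This keeps the server list out of the Jenkinsfile entirely: whatever Ansible provisioned and tagged is what the pipeline deploys to.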
Can anyone point me in the right direction here:
I am using the Environment Dashboard plugin and Job DSL plugin in Jenkins.
I can manually create a view for a job using the Environment Dashboard option. I want to create the Environment Dashboard on the main dashboard home of Jenkins, not within any folder.
I want to be able to achieve this by defining it within a Job DSL script. Is this possible, and are there any sample scripts I can use, please?
Job DSL provides a way to check all available API options at the following URL:
<your-jenkins-url>/plugin/job-dsl/api-viewer/index.html
The list of options depends on your Jenkins configuration. I can see that it is possible to configure it as a wrapper for some jobs.
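As an illustration, a Job DSL sketch of what such a wrapper block might look like; the block and attribute names below are assumptions based on the Dynamic DSL, so confirm the exact symbols in the api-viewer page above:

    job('example-job') {
        wrappers {
            // Assumed Dynamic DSL block for the Environment Dashboard plugin; verify the names in the api-viewer.
            environmentDashboard {
                nameOfEnv('staging')            // environment column shown on the dashboard (assumed attribute)
                componentName('my-service')     // component column (assumed attribute)
                buildNumber('${BUILD_NUMBER}')  // value recorded for the build (assumed attribute)
                buildJob('example-job')
            }
        }
    }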
Currently I am trying to deploy a sample microservice, developed using Spring Boot, with Jenkins and Kubernetes on my on-premise server. For that I have already created my Kubernetes resources using a Helm chart.
I tested the Helm chart deployment by logging in to the remote machine and creating the chart in my home directory. Using the terminal command "helm install" I deployed it into the Kubernetes cluster, and the endpoint is working successfully.
My Confusion
So far I have only tested from the terminal. Now I am trying to add the helm install command to my Jenkins pipeline job. Where do I need to keep this Helm chart? Do I need to copy it to the /var/lib/jenkins directory (the Jenkins home directory)? Or do I only need to give the full path in the command?
What is the best practice for storing a Helm chart for a Jenkins deployment? I am confused about the standard way of implementing this. I am new to CI/CD pipelines.
The Helm chart(s) should almost definitely be source controlled.
One reasonable approach is to keep a Helm chart in the same repository as your service. Then when Jenkins builds your project, it will also have the chart available, and can directly run helm install. (It can potentially pass credentials it owns via helm install --set options to set values during deployment.) This scales reasonably well, since it also means developers can make local changes to charts as part of their development work.
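A minimal Jenkinsfile sketch of that approach, assuming the chart lives in a chart/ directory next to the service code and that the agent has helm and cluster credentials configured; the paths, release name and values here are assumptions:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    // Build the Spring Boot service and its image here (details omitted).
                    sh './mvnw -B package'
                }
            }
            stage('Deploy') {
                steps {
                    // The chart is checked out together with the code, so it is already in the
                    // Jenkins workspace -- no need to copy anything to /var/lib/jenkins.
                    sh 'helm upgrade --install my-service ./chart/my-service --set image.tag=${BUILD_NUMBER}'
                }
            }
        }
    }

Using helm upgrade --install means the first run behaves like a fresh install and later runs update the existing release, which avoids the "release already exists" error from plain helm install.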
You can also set up a "repository" of charts. In your Jenkins setup one path is just to keep a second source control repository with charts, and check that out during deployment. Some tools like Artifactory also support keeping Helm charts that can be directly deployed without an additional checkout. The corresponding downside here is that if something like a command line or environment variable changes, you need coordinated changes in two places to make it work.
I suggest following the path below for the SDLC of Helm charts and the apps whose deployment they describe:
keep the Spring Boot app source code (incl. Dockerfile) in a dedicated repo (the CI process builds a Docker image out of it)
keep the app's Helm chart source code (which references the app image) in a dedicated repo (the CI process builds the Helm chart out of it, tags it with a version and pushes it to an artifact registry, e.g. Artifactory or Harbor)
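A sketch of what the chart repo's CI steps could look like; the chart name, hard-coded version and registry URL are assumptions, and pushing to an OCI registry with helm push requires Helm 3.8+ (older setups often use a ChartMuseum-style upload instead):

    pipeline {
        agent any
        stages {
            stage('Lint and package chart') {
                steps {
                    sh 'helm lint ./my-service'
                    // The version would normally be derived from a git tag; hard-coded here for illustration.
                    sh 'helm package ./my-service --version 1.2.3'
                }
            }
            stage('Publish chart') {
                steps {
                    // Push the packaged chart to an OCI-capable registry such as Harbor or Artifactory.
                    sh 'helm push my-service-1.2.3.tgz oci://registry.example.com/charts'
                }
            }
        }
    }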
To deploy the chart using a Jenkins job, you code in the pipeline the same steps you would use to deploy the Helm chart manually.
A modern alternative to the last step would be the GitOps methodology. In that case, you'd only put the latest released chart's tag in the GitOps repository, and the deployment would be done by a GitOps operator.
I know there's a gcloud command for this:
gcloud dataflow jobs list --help
NAME
    gcloud dataflow jobs list - lists all jobs in a particular project, optionally filtered by region
DESCRIPTION
    By default, 100 jobs in the current project are listed; this can be overridden with the gcloud --project flag, and the --limit flag.
    Using the --region flag will only list jobs from the given regional endpoint.
But I'd like to retrieve this list programmatically through the Dataflow Java SDK.
The problem I'm trying to solve:
I have a Dataflow pipeline in streaming mode and I want to set the update option (https://cloud.google.com/dataflow/pipelines/updating-a-pipeline) accordingly, based on whether this job has already been deployed or not.
E.g. when I'm deploying this job for the first time, the code shouldn't set the update flag to true, since there's no existing job to update (otherwise the driver program will complain and fail to launch); and when the job is already running, the code should be able to query the list of running jobs, recognize that, and set the update option so the existing job gets updated (otherwise a DataflowJobAlreadyExistsException is thrown).
I've found org.apache.beam.runners.dataflow.DataflowClient#listJobs(String), which can achieve this.
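For reference, a minimal sketch of that check in the driver program; it assumes the job name is set on the pipeline options, that only the first page of results needs to be inspected, and that JOB_STATE_RUNNING marks an updatable job (the class name here is hypothetical, and setUpdate is used as the programmatic counterpart of the --update flag):

    import java.io.IOException;
    import java.util.List;

    import com.google.api.services.dataflow.model.Job;
    import org.apache.beam.runners.dataflow.DataflowClient;
    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class LaunchOrUpdate {
        public static void main(String[] args) throws IOException {
            DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
                    .withValidation()
                    .as(DataflowPipelineOptions.class);

            // List the jobs in the project configured on the options and look for a
            // running job with the same name as the one we are about to launch.
            DataflowClient client = DataflowClient.create(options);
            List<Job> jobs = client.listJobs(null).getJobs();   // null = first page of results
            boolean alreadyRunning = false;
            if (jobs != null) {
                for (Job job : jobs) {
                    if (options.getJobName().equals(job.getName())
                            && "JOB_STATE_RUNNING".equals(job.getCurrentState())) {
                        alreadyRunning = true;
                        break;
                    }
                }
            }

            // Only request an update when a job with this name is already running.
            options.setUpdate(alreadyRunning);

            // ... build and run the pipeline with these options ...
        }
    }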
I have a concept that I want to implement in our environment, and I just want to ask if there is any way I can do it.
I want to configure Jenkins and GKE to automatically provision a pod with the current state of an application, and its database, when a bug is created in Jira. Basically, I want Jenkins to be triggered by an issue created by a user in Jira, and then build a replica of the application when the issue is created.
My question is this: Can GKE be manipulated or linked in some way with Jenkins so that it'll create a snapshot of the App + Database and then create a new Pod based on those 2 snapshots?
The workflow should be something like this:
Jira => Jenkins => GKE (snapshots) => GKE (Pods creating)
I would like Jenkins to communicate with GKE and go on to create a new set of Pods automatically, so that I won't have to intervene in any way.
Is there any way I can do something like this? This is just a concept for now; I'm not rushing anything, just asking for some opinions on this.
Any ideas or suggestions?
Thank you
Your question is not specific to GKE. It's about Kubernetes. You probably need to write some code and use the Kubernetes API to achieve what you described, which is a pretty custom use case/workflow.
Some pointers on using the Kubernetes API: https://kubernetes.io/docs/reference/
Here is how you could do it. For each new Jira task:
Create a new branch in the code repository, from the "live" one.
Take a backup of the database.
Deploy a new instance of the application with this branch and database backup. You probably want to use a cluster or a namespace per Jira task.
To deploy Kubernetes resources from Jenkins you can use the DevOps steps of the Kubernetes Pipeline plugin.
You can use Kubernetes Jobs to run the database backup and restore tasks.
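As a rough sketch of the Jenkins side, using plain kubectl/helm shell steps rather than the plugin's DevOps steps; the issue-key parameter, manifest and chart names are assumptions, and the Jira webhook would be what triggers this job and passes the issue key:

    pipeline {
        agent any
        parameters {
            string(name: 'JIRA_ISSUE', defaultValue: '', description: 'Jira issue key passed in by the webhook')
        }
        stages {
            stage('Create environment for the issue') {
                steps {
                    script {
                        def ns = "bug-${params.JIRA_ISSUE.toLowerCase()}"
                        // One namespace per Jira issue keeps each replica isolated.
                        sh "kubectl create namespace ${ns} || true"
                        // Kubernetes Job that restores the database snapshot into this namespace (manifest name is an assumption).
                        sh "kubectl apply -n ${ns} -f k8s/db-restore-job.yaml"
                        // Deploy the application chart against the restored database.
                        sh "helm upgrade --install app-${ns} ./chart -n ${ns}"
                    }
                }
            }
        }
    }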